title,authors,url,dateSubmitted,keyWords,abstract,paperId,source,keywords,Probability,Reasoning do anything now characterizing and evaluating inthewild jailbreak prompts on large language models,"['Xinyue Shen', 'Z. Chen', 'M. Backes', 'Yun Shen', 'Yang Zhang']",https://arxiv.org/pdf/2308.03825,2023-08-07,,"The misuse of large language models (LLMs) has garnered significant attention from the general public and LLM vendors. In response, efforts have been made to align LLMs with human values and intent use. However, a particular type of adversarial prompts, known as jailbreak prompt, has emerged and continuously evolved to bypass the safeguards and elicit harmful content from LLMs. In this paper, we conduct the first measurement study on jailbreak prompts in the wild, with 6,387 prompts collected from four platforms over six months. Leveraging natural language processing technologies and graph-based community detection methods, we discover unique characteristics of jailbreak prompts and their major attack strategies, such as prompt injection and privilege escalation. We also observe that jailbreak prompts increasingly shift from public platforms to private ones, posing new challenges for LLM vendors in proactive detection. To assess the potential harm caused by jailbreak prompts, we create a question set comprising 46,800 samples across 13 forbidden scenarios. Our experiments show that current LLMs and safeguards cannot adequately defend jailbreak prompts in all scenarios. Particularly, we identify two highly effective jailbreak prompts which achieve 0.99 attack success rates on ChatGPT (GPT-3.5) and GPT-4, and they have persisted online for over 100 days. Our work sheds light on the severe and evolving threat landscape of jailbreak prompts. We hope our study can facilitate the research community and LLM vendors in promoting safer and regulated LLMs.",1104d766527dead44a40532e8a89444d9cef5c65,Semantic Scholar,,, fuzzllm a novel and universal fuzzing framework for proactively discovering jailbreak vulnerabilities in large language models,"['Dongyu Yao', 'Jianshu Zhang', 'Ian G. Harris', 'Marcel Carlsson']",https://arxiv.org/pdf/2309.05274,2023-09-11,,"Jailbreak vulnerabilities in Large Language Models (LLMs), which exploit meticulously crafted prompts to elicit content that violates service guidelines, have captured the attention of research communities. While model owners can defend against individual jailbreak prompts through safety training strategies, this relatively passive approach struggles to handle the broader category of similar jailbreaks. To tackle this issue, we introduce FuzzLLM, an automated fuzzing framework designed to proactively test and discover jailbreak vulnerabilities in LLMs. We utilize templates to capture the structural integrity of a prompt and isolate key features of a jailbreak class as constraints. By integrating different base classes into powerful combo attacks and varying the elements of constraints and prohibited questions, FuzzLLM enables efficient testing with reduced manual effort. 
Extensive experiments demonstrate FuzzLLM's effectiveness and comprehensiveness in vulnerability discovery across various LLMs.",3c784cd3150a359e269c70cfbadd18774d66055d,Semantic Scholar,,, baseline defenses for adversarial attacks against aligned language models,"['Neel Jain', 'Avi Schwarzschild', 'Yuxin Wen', 'Gowthami Somepalli', 'John Kirchenbauer', 'Ping-yeh Chiang', 'Micah Goldblum', 'Aniruddha Saha', 'Jonas Geiping', 'Tom Goldstein']",https://arxiv.org/pdf/2309.00614,2023-09-01,,"As Large Language Models quickly become ubiquitous, it becomes critical to understand their security vulnerabilities. Recent work shows that text optimizers can produce jailbreaking prompts that bypass moderation and alignment. Drawing from the rich body of work on adversarial machine learning, we approach these attacks with three questions: What threat models are practically useful in this domain? How do baseline defense techniques perform in this new domain? How does LLM security differ from computer vision? We evaluate several baseline defense strategies against leading adversarial attacks on LLMs, discussing the various settings in which each is feasible and effective. Particularly, we look at three types of defenses: detection (perplexity based), input preprocessing (paraphrase and retokenization), and adversarial training. We discuss white-box and gray-box settings and discuss the robustness-performance trade-off for each of the defenses considered. We find that the weakness of existing discrete optimizers for text, combined with the relatively high costs of optimization, makes standard adaptive attacks more challenging for LLMs. Future research will be needed to uncover whether more powerful optimizers can be developed, or whether the strength of filtering and preprocessing defenses is greater in the LLMs domain than it has been in computer vision.",3e30a7ac4886b28eb50151f58e14a1d698cccd0e,Semantic Scholar,,, latent jailbreak a benchmark for evaluating text safety and output robustness of large language models,"['Huachuan Qiu', 'Shuai Zhang', 'Anqi Li', 'Hongliang He', 'Zhenzhong Lan']",https://arxiv.org/pdf/2307.08487,2023-07-17,,"Considerable research efforts have been devoted to ensuring that large language models (LLMs) align with human values and generate safe text. However, an excessive focus on sensitivity to certain topics can compromise the model's robustness in following instructions, thereby impacting its overall performance in completing tasks. Previous benchmarks for jailbreaking LLMs have primarily focused on evaluating the safety of the models without considering their robustness. In this paper, we propose a benchmark that assesses both the safety and robustness of LLMs, emphasizing the need for a balanced approach. To comprehensively study text safety and output robustness, we introduce a latent jailbreak prompt dataset, each involving malicious instruction embedding. Specifically, we instruct the model to complete a regular task, such as translation, with the text to be translated containing malicious instructions. To further analyze safety and robustness, we design a hierarchical annotation framework. We present a systematic analysis of the safety and robustness of LLMs regarding the position of explicit normal instructions, word replacements (verbs in explicit normal instructions, target groups in malicious instructions, cue words for explicit normal instructions), and instruction replacements (different explicit normal instructions). 
Our results demonstrate that current LLMs not only prioritize certain instruction verbs but also exhibit varying jailbreak rates for different instruction verbs in explicit normal instructions. Code and data are available at https://github.com/qiuhuachuan/latent-jailbreak.",ace98e1e58bcc364afbb2feff6d136232f5f47da,Semantic Scholar,,, defending against alignmentbreaking attacks via robustly aligned llm,"['Bochuan Cao', 'Yu Cao', 'Lu Lin', 'Jinghui Chen']",https://arxiv.org/pdf/2309.14348,2023-09-18,,"Recently, Large Language Models (LLMs) have made significant advancements and are now widely used across various domains. Unfortunately, there has been a rising concern that LLMs can be misused to generate harmful or malicious content. Though a line of research has focused on aligning LLMs with human values and preventing them from producing inappropriate content, such alignments are usually vulnerable and can be bypassed by alignment-breaking attacks via adversarially optimized or handcrafted jailbreaking prompts. In this work, we introduce a Robustly Aligned LLM (RA-LLM) to defend against potential alignment-breaking attacks. RA-LLM can be directly constructed upon an existing aligned LLM with a robust alignment checking function, without requiring any expensive retraining or fine-tuning process of the original LLM. Furthermore, we also provide a theoretical analysis for RA-LLM to verify its effectiveness in defending against alignment-breaking attacks. Through real-world experiments on open-source large language models, we demonstrate that RA-LLM can successfully defend against both state-of-the-art adversarial prompts and popular handcrafted jailbreaking prompts by reducing their attack success rates from nearly 100% to around 10% or less.",cd29c25c489562b409a60f83365f93f33ee1a0a1,Semantic Scholar,,, gptfuzzer red teaming large language models with autogenerated jailbreak prompts,"['Jiahao Yu', 'Xingwei Lin', 'Zheng Yu', 'Xinyu Xing']",https://arxiv.org/pdf/2309.10253,2023-09-19,,"Large language models (LLMs) have recently experienced tremendous popularity and are widely used from casual conversations to AI-driven programming. However, despite their considerable success, LLMs are not entirely reliable and can give detailed guidance on how to conduct harmful or illegal activities. While safety measures can reduce the risk of such outputs, adversarial jailbreak attacks can still exploit LLMs to produce harmful content. These jailbreak templates are typically manually crafted, making large-scale testing challenging. In this paper, we introduce GPTFuzz, a novel black-box jailbreak fuzzing framework inspired by the AFL fuzzing framework. Instead of manual engineering, GPTFuzz automates the generation of jailbreak templates for red-teaming LLMs. At its core, GPTFuzz starts with human-written templates as initial seeds, then mutates them to produce new templates. We detail three key components of GPTFuzz: a seed selection strategy for balancing efficiency and variability, mutate operators for creating semantically equivalent or similar sentences, and a judgment model to assess the success of a jailbreak attack. We evaluate GPTFuzz against various commercial and open-source LLMs, including ChatGPT, LLaMa-2, and Vicuna, under diverse attack scenarios. Our results indicate that GPTFuzz consistently produces jailbreak templates with a high success rate, surpassing human-crafted templates. 
Remarkably, GPTFuzz achieves over 90% attack success rates against ChatGPT and Llama-2 models, even with suboptimal initial seed templates. We anticipate that GPTFuzz will be instrumental for researchers and practitioners in examining LLM robustness and will encourage further exploration into enhancing LLM safety.",d4177489596748e43aa571f59556097f2cc4c8be,Semantic Scholar,,, using large language models for cybersecurity capturetheflag challenges and certification questions,"['W. Tann', 'Yuancheng Liu', 'Jun Heng Sim', 'C. Seah', 'E. Chang']",https://arxiv.org/pdf/2308.10443,2023-08-21,,"The assessment of cybersecurity Capture-The-Flag (CTF) exercises involves participants finding text strings or ``flags'' by exploiting system vulnerabilities. Large Language Models (LLMs) are natural-language models trained on vast amounts of words to understand and generate text; they can perform well on many CTF challenges. Such LLMs are freely available to students. In the context of CTF exercises in the classroom, this raises concerns about academic integrity. Educators must understand LLMs' capabilities to modify their teaching to accommodate generative AI assistance. This research investigates the effectiveness of LLMs, particularly in the realm of CTF challenges and questions. Here we evaluate three popular LLMs, OpenAI ChatGPT, Google Bard, and Microsoft Bing. First, we assess the LLMs' question-answering performance on five Cisco certifications with varying difficulty levels. Next, we qualitatively study the LLMs' abilities in solving CTF challenges to understand their limitations. We report on the experience of using the LLMs for seven test cases in all five types of CTF challenges. In addition, we demonstrate how jailbreak prompts can bypass and break LLMs' ethical safeguards. The paper concludes by discussing LLM's impact on CTF exercises and its implications.",e64df7e9448f7a9a4cb5d22c21c460134c8646ac,Semantic Scholar,,, autodan generating stealthy jailbreak prompts on aligned large language models,"['Xiaogeng Liu', 'Nan Xu', 'Muhao Chen', 'Chaowei Xiao']",https://arxiv.org/pdf/2310.04451,2023-10-03,,"The aligned Large Language Models (LLMs) are powerful language understanding and decision-making tools that are created through extensive alignment with human feedback. However, these large models remain susceptible to jailbreak attacks, where adversaries manipulate prompts to elicit malicious outputs that should not be given by aligned LLMs. Investigating jailbreak prompts can lead us to delve into the limitations of LLMs and further guide us to secure them. Unfortunately, existing jailbreak techniques suffer from either (1) scalability issues, where attacks heavily rely on manual crafting of prompts, or (2) stealthiness problems, as attacks depend on token-based algorithms to generate prompts that are often semantically meaningless, making them susceptible to detection through basic perplexity testing. In light of these challenges, we intend to answer this question: Can we develop an approach that can automatically generate stealthy jailbreak prompts? In this paper, we introduce AutoDAN, a novel jailbreak attack against aligned LLMs. AutoDAN can automatically generate stealthy jailbreak prompts by the carefully designed hierarchical genetic algorithm. 
Extensive evaluations demonstrate that AutoDAN not only automates the process while preserving semantic meaningfulness, but also demonstrates superior attack strength in cross-model transferability, and cross-sample universality compared with the baseline. Moreover, we also compare AutoDAN with perplexity-based defense methods and show that AutoDAN can bypass them effectively.",f3f23f7f9f5369aade19f20bc5d028cce7b9c9aa,Semantic Scholar,,, jailbreaking chatgpt via prompt engineering an empirical study,"['Yi Liu', 'Gelei Deng', 'Zhengzi Xu', 'Yuekang Li', 'Yaowen Zheng', 'Ying Zhang', 'Lida Zhao', 'Tianwei Zhang', 'Yang Liu']",http://arxiv.org/pdf/2305.13860,2023-05-23,,"Large Language Models (LLMs), like ChatGPT, have demonstrated vast potential but also introduce challenges related to content constraints and potential misuse. Our study investigates three key research questions: (1) the number of different prompt types that can jailbreak LLMs, (2) the effectiveness of jailbreak prompts in circumventing LLM constraints, and (3) the resilience of ChatGPT against these jailbreak prompts. Initially, we develop a classification model to analyze the distribution of existing prompts, identifying ten distinct patterns and three categories of jailbreak prompts. Subsequently, we assess the jailbreak capability of prompts with ChatGPT versions 3.5 and 4.0, utilizing a dataset of 3,120 jailbreak questions across eight prohibited scenarios. Finally, we evaluate the resistance of ChatGPT against jailbreak prompts, finding that the prompts can consistently evade the restrictions in 40 use-case scenarios. The study underscores the importance of prompt structures in jailbreaking LLMs and discusses the challenges of robust jailbreak prompt generation and prevention.",fc50a6202e2f675604543c1ae4ef22ec74f61ad5,Semantic Scholar,,, decomposed prompting a modular approach for solving complex tasks,"['Tushar Khot', 'H. Trivedi', 'Matthew Finlayson', 'Yao Fu', 'Kyle Richardson', 'Peter Clark', 'Ashish Sabharwal']",http://arxiv.org/pdf/2210.02406,2022-10-05,,"Few-shot prompting is a surprisingly powerful way to use Large Language Models (LLMs) to solve various tasks. However, this approach struggles as the task complexity increases or when the individual reasoning steps of the task themselves are hard to learn, especially when embedded in more complex tasks. To address this, we propose Decomposed Prompting, a new approach to solve complex tasks by decomposing them (via prompting) into simpler sub-tasks that can be delegated to a library of prompting-based LLMs dedicated to these sub-tasks. This modular structure allows each prompt to be optimized for its specific sub-task, further decomposed if necessary, and even easily replaced with more effective prompts, trained models, or symbolic functions if desired. We show that the flexibility and modularity of Decomposed Prompting allows it to outperform prior work on few-shot prompting using GPT3. On symbolic reasoning tasks, we can further decompose sub-tasks that are hard for LLMs into even simpler solvable sub-tasks. When the complexity comes from the input length, we can recursively decompose the task into the same task but with smaller inputs. 
We also evaluate our approach on textual multi-step reasoning tasks: on long-context multi-hop QA task, we can more effectively teach the sub-tasks via our separate sub-tasks prompts; and on open-domain multi-hop QA, we can incorporate a symbolic information retrieval within our decomposition framework, leading to improved performance on both tasks. Datasets, Code and Prompts available at https://github.com/allenai/DecomP.",07955e96cbd778d0ae2a68f09d073b866dd84c2a,Semantic Scholar,,, blsp bootstrapping languagespeech pretraining via behavior alignment of continuation writing,"['Chen Wang', 'Minpeng Liao', 'Zhongqiang Huang', 'Jinliang Lu', 'Junhong Wu', 'Yuchen Liu', 'Chengqing Zong', 'Jiajun Zhang']",https://arxiv.org/pdf/2309.00916,2023-09-02,,"The emergence of large language models (LLMs) has sparked significant interest in extending their remarkable language capabilities to speech. However, modality alignment between speech and text still remains an open problem. Current solutions can be categorized into two strategies. One is a cascaded approach where outputs (tokens or states) of a separately trained speech recognition system are used as inputs for LLMs, which limits their potential in modeling alignment between speech and text. The other is an end-to-end approach that relies on speech instruction data, which is very difficult to collect in large quantities. In this paper, we address these issues and propose the BLSP approach that Bootstraps Language-Speech Pre-training via behavior alignment of continuation writing. We achieve this by learning a lightweight modality adapter between a frozen speech encoder and an LLM, ensuring that the LLM exhibits the same generation behavior regardless of the modality of input: a speech segment or its transcript. The training process can be divided into two steps. The first step prompts an LLM to generate texts with speech transcripts as prefixes, obtaining text continuations. In the second step, these continuations are used as supervised signals to train the modality adapter in an end-to-end manner. We demonstrate that this straightforward process can extend the capabilities of LLMs to speech, enabling speech recognition, speech translation, spoken language understanding, and speech conversation, even in zero-shot cross-lingual scenarios.",204fd6c5e247c477d607f507ee01d94a8dbd408f,Semantic Scholar,,, howtocaption prompting llms to transform video annotations at scale,"['Nina Shvetsova', 'Anna Kukleva', 'Xudong Hong', 'Christian Rupprecht', 'B. Schiele', 'Hilde Kuehne']",https://arxiv.org/pdf/2310.04900,2023-10-07,,"Instructional videos are an excellent source for learning multimodal representations by leveraging video-subtitle pairs extracted with automatic speech recognition systems (ASR) from the audio signal in the videos. However, in contrast to human-annotated captions, both speech and subtitles naturally differ from the visual content of the videos and thus provide only noisy supervision for multimodal learning. As a result, large-scale annotation-free web video training data remains sub-optimal for training text-video models. In this work, we propose to leverage the capability of large language models (LLMs) to obtain fine-grained video descriptions aligned with videos. Specifically, we prompt an LLM to create plausible video descriptions based on ASR narrations of the video for a large-scale instructional video dataset. 
To this end, we introduce a prompting method that is able to take into account a longer text of subtitles, allowing us to capture context beyond a single sentence. To align the captions to the video temporally, we prompt the LLM to generate timestamps for each produced caption based on the subtitles. In this way, we obtain human-style video captions at scale without human supervision. We apply our method to the subtitles of the HowTo100M dataset, creating a new large-scale dataset, HowToCaption. Our evaluation shows that the resulting captions not only significantly improve the performance over many different benchmark datasets for text-video retrieval but also lead to a disentangling of textual narration from the audio, boosting performance in text-video-audio tasks.",24dd96da6f700f57132713aeb5e9b06905abab5d,Semantic Scholar,,, algo synthesizing algorithmic programs with generated oracle verifiers,"['Kexun Zhang', 'Danqing Wang', 'Jingtao Xia', 'William Yang Wang', 'Lei Li']",http://arxiv.org/pdf/2305.14591,2023-05-24,,"Large language models (LLMs) excel at implementing code from functionality descriptions but struggle with algorithmic problems that require not only implementation but also identification of the suitable algorithm. Moreover, LLM-generated programs lack guaranteed correctness and require human verification. To address these challenges, we propose ALGO, a framework that synthesizes Algorithmic programs with LLM-Generated Oracles to guide the generation and verify their correctness. ALGO first generates a reference oracle by prompting an LLM to exhaustively enumerate all the combinations of relevant variables. This oracle is then utilized to guide an arbitrary search strategy in exploring the algorithm space and to verify the synthesized algorithms. Our study shows that the LLM-generated oracles are correct for 88% of the cases. With the oracles as verifiers, ALGO can be integrated with any existing code generation model in a model-agnostic manner to enhance its performance. Experiments show that when equipped with ALGO, we achieve an 8x better one-submission pass rate over the Codex model and a 2.6x better one-submission pass rate over CodeT, the current state-of-the-art model on CodeContests. We can also get 1.3x better pass rate over the ChatGPT Code Interpreter on unseen problems. The problem set we used for testing, the prompts we used, the verifier and solution programs, and the test cases generated by ALGO are available at https://github.com/zkx06111/ALGO.",2bb4fe9bc10dbf1ea70135e52452f9f63bb10671,Semantic Scholar,,, model tuning or prompt tuning a study of large language models for clinical concept and relation extraction,"['C.A.I. Peng', 'Xi Yang', 'Kaleb E Smith', 'Zehao Yu', 'Aokun Chen', 'Jiang Bian', 'Yonghui Wu']",https://arxiv.org/pdf/2310.06239,2023-10-10,,"Objective To develop soft prompt-based learning algorithms for large language models (LLMs), examine the shape of prompts, prompt-tuning using frozen/unfrozen LLMs, transfer learning, and few-shot learning abilities. Methods We developed a soft prompt-based LLM model and compared 4 training strategies including (1) fine-tuning without prompts; (2) hard-prompt with unfrozen LLMs; (3) soft-prompt with unfrozen LLMs; and (4) soft-prompt with frozen LLMs. We evaluated 7 pretrained LLMs using the 4 training strategies for clinical concept and relation extraction on two benchmark datasets. We evaluated the transfer learning ability of the prompt-based learning algorithms in a cross-institution setting. 
We also assessed the few-shot learning ability. Results and Conclusion When LLMs are unfrozen, GatorTron-3.9B with soft prompting achieves the best strict F1-scores of 0.9118 and 0.8604 for concept extraction, outperforming the traditional fine-tuning and hard prompt-based models by 0.6~3.1% and 1.2~2.9%, respectively; GatorTron-345M with soft prompting achieves the best F1-scores of 0.8332 and 0.7488 for end-to-end relation extraction, outperforming the other two models by 0.2~2% and 0.6~11.7%, respectively. When LLMs are frozen, small (i.e., 345 million parameters) LLMs have a big gap to be competitive with unfrozen models; scaling LLMs up to billions of parameters makes frozen LLMs competitive with unfrozen LLMs. For cross-institute evaluation, soft prompting with a frozen GatorTron-8.9B model achieved the best performance. This study demonstrates that (1) machines can learn soft prompts better than humans, (2) frozen LLMs have better few-shot learning ability and transfer learning ability to facilitate multi-institution applications, and (3) frozen LLMs require large models.",2f75de70511fa9f5c7a1e7f61f2d7928d121adbf,Semantic Scholar,,, thinksum probabilistic reasoning over sets using large language models,"['Batu Mehmet Ozturkler', 'Nikolay Malkin', 'Zhen Wang', 'N. Jojic']",http://arxiv.org/pdf/2210.01293,2022-10-04,,"Large language models (LLMs) have a substantial capacity for high-level analogical reasoning: reproducing patterns in linear text that occur in their training data (zero-shot evaluation) or in the provided context (few-shot in-context learning). However, recent studies show that even the more advanced LLMs fail in scenarios that require reasoning over multiple objects or facts and making sequences of logical deductions. We propose a two-stage probabilistic inference paradigm, ThinkSum, which reasons over sets of objects or facts in a structured manner. In the first stage (Think – retrieval of associations), a LLM is queried in parallel over a set of phrases extracted from the prompt or an auxiliary model call. In the second stage (Sum – probabilistic inference or reasoning), the results of these queries are aggregated to make the final prediction. We demonstrate the possibilities and advantages of ThinkSum on the BIG-bench suite of LLM evaluation tasks, achieving improvements over the state of the art using GPT-family models on thirteen difficult tasks, often with far smaller model variants. We also compare and contrast ThinkSum with other proposed modifications to direct prompting of LLMs, such as variants of chain-of-thought prompting. Our results suggest that because the probabilistic inference in ThinkSum is performed outside of calls to the LLM, ThinkSum is less sensitive to prompt design, yields more interpretable predictions, and can be flexibly combined with latent variable models to extract structured knowledge from LLMs. Overall, our proposed paradigm represents a promising approach for enhancing the reasoning capabilities of LLMs.",370cea8b4220917f45a69358c0303df71f5063c7,Semantic Scholar,,, divide and prompt chain of thought prompting for texttosql,"['X. Liu', 'Zhao Tan']",http://arxiv.org/pdf/2304.11556,2023-04-23,,"Chain-of-thought (CoT) prompting combined with large language models (LLMs) have achieved encouraging results on complex reasoning tasks. Text-to-SQL is a critical semantic parsing task that converts natural language questions into SQL statements, involving a complex reasoning process. 
However, there is little work about using CoT prompting to activate LLM's reasoning capabilities on Text-to-SQL tasks. In this work, we propose a new paradigm for prompting Text-to-SQL tasks, called Divide-and-Prompt, which first divides the task into subtasks, and then approach each subtask through CoT. We present 3 prompting-based methods to enhance the Text-to-SQL ability of LLMs. Experiments show that these prompts guide LLMs to generate Text-to-SQL with higher execution accuracy.",40c9280d87059c0cc28f2a08d46a7045fa3e9736,Semantic Scholar,,, taggpt large language models are zeroshot multimodal taggers,"['Chen Li', 'Yixiao Ge', 'Jiayong Mao', 'Dian Li', 'Ying Shan']",http://arxiv.org/pdf/2304.03022,2023-04-06,,"Tags are pivotal in facilitating the effective distribution of multimedia content in various applications in the contemporary Internet era, such as search engines and recommendation systems. Recently, large language models (LLMs) have demonstrated impressive capabilities across a wide range of tasks. In this work, we propose TagGPT, a fully automated system capable of tag extraction and multimodal tagging in a completely zero-shot fashion. Our core insight is that, through elaborate prompt engineering, LLMs are able to extract and reason about proper tags given textual clues of multimodal data, e.g., OCR, ASR, title, etc. Specifically, to automatically build a high-quality tag set that reflects user intent and interests for a specific application, TagGPT predicts large-scale candidate tags from a series of raw data via prompting LLMs, filtered with frequency and semantics. Given a new entity that needs tagging for distribution, TagGPT introduces two alternative options for zero-shot tagging, i.e., a generative method with late semantic matching with the tag set, and another selective method with early matching in prompts. It is well noticed that TagGPT provides a system-level solution based on a modular framework equipped with a pre-trained LLM (GPT-3.5 used here) and a sentence embedding model (SimCSE used here), which can be seamlessly replaced with any more advanced one you want. TagGPT is applicable for various modalities of data in modern social media and showcases strong generalization ability to a wide range of applications. We evaluate TagGPT on publicly available datasets, i.e., Kuaishou and Food.com, and demonstrate the effectiveness of TagGPT compared to existing hashtags and off-the-shelf taggers. Project page: https://github.com/TencentARC/TagGPT.",4895d443c36bd136a818be2db34442354ba408d1,Semantic Scholar,,, humanintheloop machine translation with large language model,"['Xinyi Yang', 'Runzhe Zhan', 'Derek F. Wong', 'Junchao Wu', 'Lidia S. Chao']",https://arxiv.org/pdf/2310.08908,2023-10-13,,"The large language model (LLM) has garnered significant attention due to its in-context learning mechanisms and emergent capabilities. The research community has conducted several pilot studies to apply LLMs to machine translation tasks and evaluate their performance from diverse perspectives. However, previous research has primarily focused on the LLM itself and has not explored human intervention in the inference process of LLM. The characteristics of LLM, such as in-context learning and prompt engineering, closely mirror human cognitive abilities in language tasks, offering an intuitive solution for human-in-the-loop generation. In this study, we propose a human-in-the-loop pipeline that guides LLMs to produce customized outputs with revision instructions. 
The pipeline initiates by prompting the LLM to produce a draft translation, followed by the utilization of automatic retrieval or human feedback as supervision signals to enhance the LLM’s translation through in-context learning. The human-machine interactions generated in this pipeline are also stored in an external database to expand the in-context retrieval database, enabling us to leverage human supervision in an offline setting. We evaluate the proposed pipeline using the GPT-3.5-turbo API on five domain-specific benchmarks for German-English translation. The results demonstrate the effectiveness of the pipeline in tailoring in-domain translations and improving translation performance compared to direct translation instructions. Additionally, we discuss the experimental results from the following perspectives: 1) the effectiveness of different in-context retrieval methods; 2) the construction of a retrieval database under low-resource scenarios; 3) the observed differences across selected domains; 4) the quantitative analysis of sentence-level and word-level statistics; and 5) the qualitative analysis of representative translation cases.",4950bf6f873ba1409a7bbad25cf5c93c8f833453,Semantic Scholar,,, large language models vote prompting for rare disease identification,"['David Oniani', 'Jordan Hilsman', 'Hang Dong', 'F. Gao', 'Shiven Verma', 'Yanshan Wang']",https://arxiv.org/pdf/2308.12890,2023-08-24,,"The emergence of generative Large Language Models (LLMs) emphasizes the need for accurate and efficient prompting approaches. LLMs are often applied in Few-Shot Learning (FSL) contexts, where tasks are executed with minimal training data. FSL has become popular in many Artificial Intelligence (AI) subdomains, including AI for health. Rare diseases affect a small fraction of the population. Rare disease identification from clinical notes inherently requires FSL techniques due to limited data availability. Manual data collection and annotation is both expensive and time-consuming. In this paper, we propose Models-Vote Prompting (MVP), a flexible prompting approach for improving the performance of LLM queries in FSL settings. MVP works by prompting numerous LLMs to perform the same tasks and then conducting a majority vote on the resulting outputs. This method achieves improved results to any one model in the ensemble on one-shot rare disease identification and classification tasks. We also release a novel rare disease dataset for FSL, available to those who signed the MIMIC-IV Data Use Agreement (DUA). Furthermore, in using MVP, each model is prompted multiple times, substantially increasing the time needed for manual annotation, and to address this, we assess the feasibility of using JSON for automating generative LLM evaluation.",4b091d92f793161046b483ee93df244bf93bb508,Semantic Scholar,,, hypothesis search inductive reasoning with language models,"['Ruocheng Wang', 'E. Zelikman', 'Gabriel Poesia', 'Yewen Pu', 'Nick Haber', 'Noah D. Goodman']",https://arxiv.org/pdf/2309.05660,2023-09-11,,"Inductive reasoning is a core problem-solving capacity: humans can identify underlying principles from a few examples, which can then be robustly generalized to novel scenarios. Recent work has evaluated large language models (LLMs) on inductive reasoning tasks by directly prompting them, yielding ""in context learning."" This can work well for straightforward inductive tasks, but performs very poorly on more complex tasks such as the Abstraction and Reasoning Corpus (ARC). 
In this work, we propose to improve the inductive reasoning ability of LLMs by generating explicit hypotheses at multiple levels of abstraction: we prompt the LLM to propose multiple abstract hypotheses about the problem, in natural language, then implement the natural language hypotheses as concrete Python programs. These programs can be directly verified by running on the observed examples and generalized to novel inputs. Because of the prohibitive cost of generation with state-of-the-art LLMs, we consider a middle step to filter the set of hypotheses that will be implemented into programs: we either ask the LLM to summarize into a smaller set of hypotheses, or ask human annotators to select a subset of the hypotheses. We verify our pipeline's effectiveness on the ARC visual inductive reasoning benchmark, its variant 1D-ARC, and string transformation dataset SyGuS. On a random 40-problem subset of ARC, our automated pipeline using LLM summaries achieves 27.5% accuracy, significantly outperforming the direct prompting baseline (accuracy of 12.5%). With the minimal human input of selecting from LLM-generated candidates, the performance is boosted to 37.5%. (And we argue this is a lower bound on the performance of our approach without filtering.) Our ablation studies show that abstract hypothesis generation and concrete program representations are both beneficial for LLMs to perform inductive reasoning tasks.",4cf527e9e0d68e3fc16d39fbcdb3869cd3ccf60f,Semantic Scholar,,, pearl prompting large language models to plan and execute actions over long documents,"['Simeng Sun', 'Y. Liu', 'Shuo Wang', 'Chenguang Zhu', 'Mohit Iyyer']",http://arxiv.org/pdf/2305.14564,2023-05-23,,"Strategies such as chain-of-thought prompting improve the performance of large language models (LLMs) on complex reasoning tasks by decomposing input examples into intermediate steps. However, it remains unclear how to apply such methods to reason over long input documents, in which both the decomposition and the output of each intermediate step are non-trivial to obtain. In this work, we propose PEARL, a prompting framework to improve reasoning over long documents, which consists of three stages: action mining, plan formulation, and plan execution. More specifically, given a question about a long document, PEARL decomposes the question into a sequence of actions (e.g., SUMMARIZE, FIND_EVENT, FIND_RELATION) and then executes them over the document to obtain the answer. Each stage of PEARL is implemented via zero-shot or few-shot prompting of LLMs (in our work, GPT-4) with minimal human input. We evaluate PEARL on a challenging subset of the QuALITY dataset, which contains questions that require complex reasoning over long narrative texts. PEARL outperforms zero-shot and chain-of-thought prompting on this dataset, and ablation experiments show that each stage of PEARL is critical to its performance. Overall, PEARL is a first step towards leveraging LLMs to reason over long documents.",4ee96f0757e517928590a2300af5d40ba768a5a7,Semantic Scholar,,, aligning language models to user opinions,"['EunJeong Hwang', 'Bodhisattwa Prasad Majumder', 'Niket Tandon']",http://arxiv.org/pdf/2305.14929,2023-05-24,,"An important aspect of developing LLMs that interact with humans is to align models' behavior to their users. It is possible to prompt an LLM into behaving as a certain persona, especially a user group or ideological persona the model captured during its pretraining stage. 
But, how to best align an LLM with a specific user and not a demographic or ideological group remains an open question. Mining public opinion surveys (by Pew Research), we find that the opinions of a user and their demographics and ideologies are not mutual predictors. We use this insight to align LLMs by modeling both user opinions as well as user demographics and ideology, achieving up to 7 points accuracy gains in predicting public opinions from survey questions across a broad set of topics. In addition to the typical approach of prompting LLMs with demographics and ideology, we discover that utilizing the most relevant past opinions from individual users enables the model to predict user opinions more accurately.",5db0f55332839c408e3049cea1a6ad48fefba70c,Semantic Scholar,,, user simulation with large language models for evaluating taskoriented dialogue,"['Sam Davidson', 'Salvatore Romeo', 'Raphael Shu', 'James Gung', 'Arshit Gupta', 'Saab Mansour', 'Yi Zhang']",https://arxiv.org/pdf/2309.13233,2023-09-23,,"One of the major impediments to the development of new task-oriented dialogue (TOD) systems is the need for human evaluation at multiple stages and iterations of the development process. In an effort to move toward automated evaluation of TOD, we propose a novel user simulator built using recently developed large pretrained language models (LLMs). In order to increase the linguistic diversity of our system relative to the related previous work, we do not fine-tune the LLMs used by our system on existing TOD datasets; rather we use in-context learning to prompt the LLMs to generate robust and linguistically diverse output with the goal of simulating the behavior of human interlocutors. Unlike previous work, which sought to maximize goal success rate (GSR) as the primary metric of simulator performance, our goal is a system which achieves a GSR similar to that observed in human interactions with TOD systems. Using this approach, our current simulator is effectively able to interact with several TOD systems, especially on single-intent conversational goals, while generating lexically and syntactically diverse output relative to previous simulators that rely upon fine-tuned models. Finally, we collect a Human2Bot dataset of humans interacting with the same TOD systems with which we experimented in order to better quantify these achievements.",64e9e1686cf85db163f007a8621e2c1b24d86feb,Semantic Scholar,,, booookscore a systematic exploration of booklength summarization in the era of llms,"['Yapei Chang', 'Kyle Lo', 'Tanya Goyal', 'Mohit Iyyer']",https://arxiv.org/pdf/2310.00785,2023-10-01,,"Summarizing book-length documents (>100K tokens) that exceed the context window size of large language models (LLMs) requires first breaking the input document into smaller chunks and then prompting an LLM to merge, update, and compress chunk-level summaries. Despite the complexity and importance of this task, it has yet to be meaningfully studied due to the challenges of evaluation: existing book-length summarization datasets (e.g., BookSum) are in the pretraining data of most public LLMs, and existing evaluation methods struggle to capture errors made by modern LLM summarizers. In this paper, we present the first study of the coherence of LLM-based book-length summarizers implemented via two prompting workflows: (1) hierarchically merging chunk-level summaries, and (2) incrementally updating a running summary. 
We obtain 1193 fine-grained human annotations on GPT-4 generated summaries of 100 recently-published books and identify eight common types of coherence errors made by LLMs. Because human evaluation is expensive and time-consuming, we develop an automatic metric, BooookScore, that measures the proportion of sentences in a summary that do not contain any of the identified error types. BooookScore has high agreement with human annotations and allows us to systematically evaluate the impact of many other critical parameters (e.g., chunk size, base LLM) while saving $15K and 500 hours in human evaluation costs. We find that closed-source LLMs such as GPT-4 and Claude 2 produce summaries with higher BooookScore than the oft-repetitive ones generated by LLaMA 2. Incremental updating yields lower BooookScore but higher level of detail than hierarchical merging, a trade-off sometimes preferred by human annotators. We release code and annotations after blind review to spur more principled research on book-length summarization.",65fe385a665480b41fafc56d76a3bd72e92e8886,Semantic Scholar,,, reranking for natural language generation from logical forms a study based on large language models,"['Levon Haroutunian', 'Zhuang Li', 'Lucian Galescu', 'Philip R. Cohen', 'Raj Tumuluri', 'Gholamreza Haffari']",https://arxiv.org/pdf/2309.12294,2023-09-21,,"Large language models (LLMs) have demonstrated impressive capabilities in natural language generation. However, their output quality can be inconsistent, posing challenges for generating natural language from logical forms (LFs). This task requires the generated outputs to embody the exact semantics of LFs, without missing any LF semantics or creating any hallucinations. In this work, we tackle this issue by proposing a novel generate-and-rerank approach. Our approach involves initially generating a set of candidate outputs by prompting an LLM and subsequently reranking them using a task-specific reranker model. In addition, we curate a manually collected dataset to evaluate the alignment between different ranking metrics and human judgements. The chosen ranking metrics are utilized to enhance the training and evaluation of the reranker model. By conducting extensive experiments on three diverse datasets, we demonstrate that the candidates selected by our reranker outperform those selected by baseline methods in terms of semantic consistency and fluency, as measured by three comprehensive metrics. Our findings provide strong evidence for the effectiveness of our approach in improving the quality of generated outputs.",6be6fe206f8ca735f8df26758bf877572abb10d3,Semantic Scholar,,, not what you've signed up for compromising realworld llmintegrated applications with indirect prompt injection,"['Kai Greshake', 'Sahar Abdelnabi', 'Shailesh Mishra', 'C. Endres', 'Thorsten Holz', 'Mario Fritz']",https://arxiv.org/pdf/2302.12173,2023-02-23,,"Large Language Models (LLMs) are increasingly being integrated into applications, with versatile functionalities that can be easily modulated via natural language prompts. So far, it was assumed that the user is directly prompting the LLM. But, what if it is not the user prompting? We show that LLM-Integrated Applications blur the line between data and instructions and reveal several new attack vectors, using Indirect Prompt Injection, that enable adversaries to remotely (i.e., without a direct interface) exploit LLM-integrated applications by strategically injecting prompts into data likely to be retrieved at inference time. 
We derive a comprehensive taxonomy from a computer security perspective to broadly investigate impacts and vulnerabilities, including data theft, worming, information ecosystem contamination, and other novel security risks. We then demonstrate the practical viability of our attacks against both real-world systems, such as Bing Chat and code-completion engines, and GPT-4 synthetic applications. We show how processing retrieved prompts can act as arbitrary code execution, manipulate the application's functionality, and control how and if other APIs are called. Despite the increasing reliance on LLMs, effective mitigations of these emerging threats are lacking. By raising awareness of these vulnerabilities, we aim to promote the safe and responsible deployment of these powerful models and the development of robust defenses that protect users from potential attacks.",705e49afd92130f2bc1e0d4d0b1f6cb14e88803f,Semantic Scholar,,, leveraging large language models for exploiting asr uncertainty,"['Pranay Dighe', 'Yi Su', 'Shangshang Zheng', 'Yunshu Liu', 'Vineet Garg', 'Xiaochuan Niu', 'Ahmed H. Tewfik']",https://arxiv.org/pdf/2309.04842,2023-09-09,,"While large language models excel in a variety of natural language processing (NLP) tasks, to perform well on spoken language understanding (SLU) tasks, they must either rely on off-the-shelf automatic speech recognition (ASR) systems for transcription, or be equipped with an in-built speech modality. This work focuses on the former scenario, where LLM's accuracy on SLU tasks is constrained by the accuracy of a fixed ASR system on the spoken input. Specifically, we tackle speech-intent classification task, where a high word-error-rate can limit the LLM's ability to understand the spoken intent. Instead of chasing a high accuracy by designing complex or specialized architectures regardless of deployment costs, we seek to answer how far we can go without substantially changing the underlying ASR and LLM, which can potentially be shared by multiple unrelated tasks. To this end, we propose prompting the LLM with an n-best list of ASR hypotheses instead of only the error-prone 1-best hypothesis. We explore prompt-engineering to explain the concept of n-best lists to the LLM; followed by the finetuning of Low-Rank Adapters on the downstream tasks. Our approach using n-best lists proves to be effective on a device-directed speech detection task as well as on a keyword spotting task, where systems using n-best list prompts outperform those using 1-best ASR hypothesis; thus paving the way for an efficient method to exploit ASR uncertainty via LLMs for speech-based applications.",72fb75f7c38a83424308c8205bb36cd88995494b,Semantic Scholar,,, language models are weak learners,"['Hariharan Manikandan', 'Yiding Jiang', 'J. Z. Kolter']",http://arxiv.org/pdf/2306.14101,2023-06-25,,"A central notion in practical and theoretical machine learning is that of a $\textit{weak learner}$, classifiers that achieve better-than-random performance (on any given distribution over data), even by a small margin. Such weak learners form the practical basis for canonical machine learning methods such as boosting. In this work, we illustrate that prompt-based large language models can operate effectively as said weak learners. Specifically, we illustrate the use of a large language model (LLM) as a weak learner in a boosting algorithm applied to tabular data. 
We show that by providing (properly sampled according to the distribution of interest) text descriptions of tabular data samples, LLMs can produce a summary of the samples that serves as a template for classification and achieves the aim of acting as a weak learner on this task. We incorporate these models into a boosting approach, which in some settings can leverage the knowledge within the LLM to outperform traditional tree-based boosting. The model outperforms both few-shot learning and occasionally even more involved fine-tuning procedures, particularly for tasks involving small numbers of data points. The results illustrate the potential for prompt-based LLMs to function not just as few-shot learners themselves, but as components of larger machine learning pipelines.",7d87fbdfbf5038a4e0ff09801b6d3b8a2e0c613a,Semantic Scholar,,, connecting large language models with evolutionary algorithms yields powerful prompt optimizers,"['Qingyan Guo', 'Rui Wang', 'Junliang Guo', 'Bei Li', 'Kaitao Song', 'Xu Tan', 'Guoqing Liu', 'Jiang Bian', 'Yujiu Yang', 'Tsinghua University', 'Microsoft Research']",https://arxiv.org/pdf/2309.08532,2023-09-15,,"Large Language Models (LLMs) excel in various tasks, but they rely on carefully crafted prompts that often demand substantial human effort. To automate this process, in this paper, we propose a novel framework for discrete prompt optimization, called EvoPrompt, which borrows the idea of evolutionary algorithms (EAs) as they exhibit good performance and fast convergence. To enable EAs to work on discrete prompts, which are natural language expressions that need to be coherent and human-readable, we connect LLMs with EAs. This approach allows us to simultaneously leverage the powerful language processing capabilities of LLMs and the efficient optimization performance of EAs. Specifically, abstaining from any gradients or parameters, EvoPrompt starts from a population of prompts and iteratively generates new prompts with LLMs based on the evolutionary operators, improving the population based on the development set. We optimize prompts for both closed- and open-source LLMs including GPT-3.5 and Alpaca, on 9 datasets spanning language understanding and generation tasks. EvoPrompt significantly outperforms human-engineered prompts and existing methods for automatic prompt generation by up to 25% and 14% respectively. Furthermore, EvoPrompt demonstrates that connecting LLMs with EAs creates synergies, which could inspire further research on the combination of LLMs and conventional algorithms.",8d17234680db76f99efd22fbcb169f45d2d79d93,Semantic Scholar,,, marked personas using natural language prompts to measure stereotypes in language models,"['Myra Cheng', 'Esin Durmus', 'Dan Jurafsky']",http://arxiv.org/pdf/2305.18189,2023-05-29,,"To recognize and mitigate harms from large language models (LLMs), we need to understand the prevalence and nuances of stereotypes in LLM outputs. 
Toward this end, we present Marked Personas, a prompt-based method to measure stereotypes in LLMs for intersectional demographic groups without any lexicon or data labeling. Grounded in the sociolinguistic concept of markedness (which characterizes explicitly linguistically marked categories versus unmarked defaults), our proposed method is twofold: 1) prompting an LLM to generate personas, i.e., natural language descriptions, of the target demographic group alongside personas of unmarked, default groups; 2) identifying the words that significantly distinguish personas of the target group from corresponding unmarked ones. We find that the portrayals generated by GPT-3.5 and GPT-4 contain higher rates of racial stereotypes than human-written portrayals using the same prompts. The words distinguishing personas of marked (non-white, non-male) groups reflect patterns of othering and exoticizing these demographics. An intersectional lens further reveals tropes that dominate portrayals of marginalized groups, such as tropicalism and the hypersexualization of minoritized women. These representational harms have concerning implications for downstream applications like story generation.",8d9ca1e2c703e2752a4904c967a65d45d0bef5f6,Semantic Scholar,,, promptner prompting for named entity recognition,"['D. Ashok', 'Zachary Chase Lipton']",http://arxiv.org/pdf/2305.15444,2023-05-24,,"In a surprising turn, Large Language Models (LLMs) together with a growing arsenal of prompt-based heuristics now offer powerful off-the-shelf approaches providing few-shot solutions to myriad classic NLP problems. However, despite promising early results, these LLM-based few-shot methods remain far from the state of the art in Named Entity Recognition (NER), where prevailing methods include learning representations via end-to-end structural understanding and fine-tuning on standard labeled corpora. In this paper, we introduce PromptNER, a new state-of-the-art algorithm for few-Shot and cross-domain NER. To adapt to any new NER task PromptNER requires a set of entity definitions in addition to the standard few-shot examples. Given a sentence, PromptNER prompts an LLM to produce a list of potential entities along with corresponding explanations justifying their compatibility with the provided entity type definitions. Remarkably, PromptNER achieves state-of-the-art performance on few-shot NER, achieving a 4% (absolute) improvement in F1 score on the ConLL dataset, a 9% (absolute) improvement on the GENIA dataset, and a 4% (absolute) improvement on the FewNERD dataset. PromptNER also moves the state of the art on Cross Domain NER, outperforming prior methods (including those not limited to the few-shot setting), setting a new mark on 3/5 CrossNER target domains, with an average F1 gain of 3%, despite using less than 2% of the available data.",9141480721653789597b6e537ee0eeab401f3e60,Semantic Scholar,,, boosting theoryofmind performance in large language models via prompting,"['Shima Rahimi Moghaddam', 'C. Honey']",http://arxiv.org/pdf/2304.11490,2023-04-22,,"Large language models (LLMs) excel in many tasks in 2023, but they still face challenges in complex reasoning. Theory-of-mind (ToM) tasks, which require understanding agents' beliefs, goals, and mental states, are essential for common-sense reasoning involving humans, making it crucial to enhance LLM performance in this area. 
This study measures the ToM performance of GPT-4 and three GPT-3.5 variants (Davinci-2, Davinci-3, GPT-3.5-Turbo), and investigates the effectiveness of in-context learning in improving their ToM comprehension. We evaluated prompts featuring two-shot chain of thought reasoning and step-by-step thinking instructions. We found that LLMs trained with Reinforcement Learning from Human Feedback (RLHF) (all models excluding Davinci-2) improved their ToM accuracy via in-context learning. GPT-4 performed best in zero-shot settings, reaching nearly 80% ToM accuracy, but still fell short of the 87% human accuracy on the test set. However, when supplied with prompts for in-context learning, all RLHF-trained LLMs exceeded 80% ToM accuracy, with GPT-4 reaching 100%. These results demonstrate that appropriate prompting enhances LLM ToM reasoning, and they underscore the context-dependent nature of LLM cognitive capacities.",96d6bb5d6abdeda9b2db9af6296527200ba7aa32,Semantic Scholar,,, copilot for xcode exploring aiassisted programming by prompting cloudbased large language models,"['C. Tan', 'Shangxin Guo', 'M. Wong', 'Ching Nam Hang']",https://arxiv.org/pdf/2307.14349,2023-07-08,,"This paper presents an AI-assisted programming tool called Copilot for Xcode for program composition and design to support human software developers. By seamlessly integrating cloud-based Large Language Models (LLM) with Apple's local development environment, Xcode, this tool enhances productivity and unleashes creativity for software development in Apple software ecosystem (e.g., iOS apps, macOS). Leveraging advanced natural language processing (NLP) techniques, Copilot for Xcode effectively processes source code tokens and patterns within code repositories, enabling features such as code generation, autocompletion, documentation, and error detection. Software developers can also query and make ""small"" decisions for program composition, some of which can be made simultaneously, and this is facilitated through prompt engineering in a chat interface of Copilot for Xcode. Finally, we present simple case studies as evidence of the effectiveness of utilizing NLP in Xcode to prompt popular LLM services like OpenAI ChatGPT for program composition and design.",a3509cef906a4517238c1764676cf637efcd1d5e,Semantic Scholar,,, codeie large code generation models are better fewshot information extractors,"['Peng Li', 'Tianxiang Sun', 'Qiong Tang', 'Hang Yan', 'Yuanbin Wu', 'Xuanjing Huang', 'Xipeng Qiu Academy for EngineeringTechnology', 'Fudan University', 'School of Materials Science', 'Technology', 'East China Normal University']",http://arxiv.org/pdf/2305.05711,2023-05-09,,"Large language models (LLMs) pre-trained on massive corpora have demonstrated impressive few-shot learning ability on many NLP tasks. A common practice is to recast the task into a text-to-text format such that generative LLMs of natural language (NL-LLMs) like GPT-3 can be prompted to solve it. However, it is nontrivial to perform information extraction (IE) tasks with NL-LLMs since the output of the IE task is usually structured and therefore is hard to be converted into plain text. In this paper, we propose to recast the structured output in the form of code instead of natural language and utilize generative LLMs of code (Code-LLMs) such as Codex to perform IE tasks, in particular, named entity recognition and relation extraction. 
In contrast to NL-LLMs, we show that Code-LLMs can be well-aligned with these IE tasks by designing code-style prompts and formulating these IE tasks as code generation tasks. Experiment results on seven benchmarks show that our method consistently outperforms fine-tuning moderate-size pre-trained models specially designed for IE tasks (e.g., UIE) and prompting NL-LLMs under few-shot settings. We further conduct a series of in-depth analyses to demonstrate the merits of leveraging Code-LLMs for IE tasks.",a86dd6c62d3dc9c7989c98a3e4ace3fd8000e515,Semantic Scholar,,, zero and fewshot prompting with llms a comparative study with finetuned models for bangla sentiment analysis,"['Md. Arid Hasan', 'Shudipta Das', 'Afiyat Anjum', 'Firoj Alam', 'Anika Anjum', 'Avijit Sarker', 'S. R. H. Noori']",https://arxiv.org/pdf/2308.10783,2023-08-21,,"The rapid expansion of the digital world has propelled sentiment analysis into a critical tool across diverse sectors such as marketing, politics, customer service, and healthcare. While there have been significant advancements in sentiment analysis for widely spoken languages, low-resource languages, such as Bangla, remain largely under-researched due to resource constraints. Furthermore, the recent unprecedented performance of Large Language Models (LLMs) in various applications highlights the need to evaluate them in the context of low-resource languages. In this study, we present a sizeable manually annotated dataset encompassing 33,605 Bangla news tweets and Facebook comments. We also investigate zero- and few-shot in-context learning with several language models, including Flan-T5, GPT-4, and Bloomz, offering a comparative analysis against fine-tuned models. Our findings suggest that monolingual transformer-based models consistently outperform other models, even in zero and few-shot scenarios. To foster continued exploration, we intend to make this dataset and our research tools publicly available to the broader research community.",bc70af9248d210663edf22e5fc84ca9313c697b0,Semantic Scholar,,, progprompt generating situated robot task plans using large language models,"['Ishika Singh', 'Valts Blukis', 'Arsalan Mousavian', 'Ankit Goyal', 'Danfei Xu', 'Jonathan Tremblay', 'D. Fox', 'Jesse Thomason', 'Animesh Garg']",https://arxiv.org/pdf/2209.11302,2022-09-22,,"Task planning can require defining myriad domain knowledge about the world in which a robot needs to act. To ameliorate that effort, large language models (LLMs) can be used to score potential next actions during task planning, and even generate action sequences directly, given an instruction in natural language with no additional domain information. However, such methods either require enumerating all possible next steps for scoring, or generate free-form text that may contain actions not possible on a given robot in its current context. We present a programmatic LLM prompt structure that enables plan generation functional across situated environments, robot capabilities, and tasks. Our key insight is to prompt the LLM with program-like specifications of the available actions and objects in an environment, as well as with example programs that can be executed. 
We make concrete recommendations about prompt structure and generation constraints through ablation experiments, demonstrate state of the art success rates in VirtualHome household tasks, and deploy our method on a physical robot arm for tabletop tasks. Website at progprompt.github.io",c03fa01fbb9c77fe3d10609ba5f1dee33a723867,Semantic Scholar,,, large language models can accomplish business process management tasks,"['Michael Grohs', 'Luka Abb', 'Nourhan Elsayed', 'Jana-Rebecca Rehse']",https://arxiv.org/pdf/2307.09923,2023-07-19,,"Business Process Management (BPM) aims to improve organizational activities and their outcomes by managing the underlying processes. To achieve this, it is often necessary to consider information from various sources, including unstructured textual documents. Therefore, researchers have developed several BPM-specific solutions that extract information from textual documents using Natural Language Processing techniques. These solutions are specific to their respective tasks and cannot accomplish multiple process-related problems as a general-purpose instrument. However, in light of the recent emergence of Large Language Models (LLMs) with remarkable reasoning capabilities, such a general-purpose instrument with multiple applications now appears attainable. In this paper, we illustrate how LLMs can accomplish text-related BPM tasks by applying a specific LLM to three exemplary tasks: mining imperative process models from textual descriptions, mining declarative process models from textual descriptions, and assessing the suitability of process tasks from textual descriptions for robotic process automation. We show that, without extensive configuration or prompt engineering, LLMs perform comparably to or better than existing solutions and discuss implications for future BPM research as well as practical usage.",cce17289765132b6192ccf90123bb7f5ef920c8e,Semantic Scholar,,, large language models are biased to overestimate profoundness,"['Eugenio Herrera-Berg', 'Tomás Vergara Browne', ""Pablo Le'on-Villagr'a"", 'Marc-Lluís Vives', 'Cristian Buc Calderon']",https://aclanthology.org/2023.emnlp-main.599.pdf,2023-10-22,,"Recent advancements in natural language processing by large language models (LLMs), such as GPT-4, have been suggested to approach Artificial General Intelligence. And yet, it is still under dispute whether LLMs possess similar reasoning abilities to humans. This study evaluates GPT-4 and various other LLMs in judging the profoundness of mundane, motivational, and pseudo-profound statements. We found a significant statement-to-statement correlation between the LLMs and humans, irrespective of the type of statements and the prompting technique used. However, LLMs systematically overestimate the profoundness of nonsensical statements, with the exception of Tk-instruct, which uniquely underestimates the profoundness of statements. Only few-shot learning prompts, as opposed to chain-of-thought prompting, draw LLMs ratings closer to humans. Furthermore, this work provides insights into the potential biases induced by Reinforcement Learning from Human Feedback (RLHF), inducing an increase in the bias to overestimate the profoundness of statements.",d0ffb09a00b67365efb9e217c3fd45d804733810,Semantic Scholar,,, democratizing llms for lowresource languages by leveraging their english dominant abilities with linguisticallydiverse prompts,"['Xuan-Phi Nguyen', 'Sharifah Mahani Aljunied', 'Shafiq R. 
Joty', 'Lidong Bing']",http://arxiv.org/pdf/2306.11372,2023-06-20,,"Large language models (LLMs) are known to effectively perform tasks by simply observing few exemplars. However, in low-resource languages, obtaining such hand-picked exemplars can still be challenging, where unsupervised techniques may be necessary. Moreover, competent generative capabilities of LLMs are observed only in high-resource languages, while their performances among under-represented languages fall behind due to pre-training data imbalance. To elicit LLMs' ability onto low-resource languages without any supervised data, we propose to assemble synthetic exemplars from a diverse set of high-resource languages to prompt the LLMs to translate from any language into English. These prompts are then used to create intra-lingual exemplars to perform tasks in the target languages. Our unsupervised prompting method performs on par with supervised few-shot learning in LLMs of different sizes for translations between English and 13 Indic and 21 African low-resource languages. We also show that fine-tuning a 7B model on data generated from our method helps it perform competitively with a 175B model. In non-English translation tasks, our method even outperforms supervised prompting by up to 3 chrF++ in many low-resource languages. When evaluated on zero-shot multilingual summarization, our method surpasses other English-pivoting baselines by up to 4 ROUGE-L and is also favored by GPT-4.",e0867e9f3a715851a90d17423f7f3b33a2a66bb1,Semantic Scholar,,, exploiting asymmetry for synthetic training data generation synthie and the case of information extraction,"['Martin Josifoski', 'Marija Sakota', 'Maxime Peyrard', 'Robert West']",http://arxiv.org/pdf/2303.04132,2023-03-07,,"Large language models (LLMs) have great potential for synthetic data generation. This work shows that useful data can be synthetically generated even for tasks that cannot be solved directly by LLMs: for problems with structured outputs, it is possible to prompt an LLM to perform the task in the reverse direction, by generating plausible input text for a target output structure. Leveraging this asymmetry in task difficulty makes it possible to produce large-scale, high-quality data for complex tasks. We demonstrate the effectiveness of this approach on closed information extraction, where collecting ground-truth data is challenging, and no satisfactory dataset exists to date. We synthetically generate a dataset of 1.8M data points, establish its superior quality compared to existing datasets in a human evaluation, and use it to finetune small models (220M and 770M parameters), termed SynthIE, that outperform the prior state of the art (with equal model size) by a substantial margin of 57 absolute points in micro-F1 and 79 points in macro-F1. Code, data, and models are available at https://github.com/epfl-dlab/SynthIE.",f64e49d76048c902cc02e8ae27dcd4ac0dbcb97f,Semantic Scholar,,, query rewriting for retrievalaugmented large language models,"['Xinbei Ma', 'Yeyun Gong', 'Pengcheng He', 'Hai Zhao', 'Nan Duan']",http://arxiv.org/pdf/2305.14283,2023-05-23,,"Large Language Models (LLMs) play powerful, black-box readers in the retrieve-then-read pipeline, making remarkable progress in knowledge-intensive tasks. This work introduces a new framework, Rewrite-Retrieve-Read instead of the previous retrieve-then-read for the retrieval-augmented LLMs from the perspective of the query rewriting. 
Unlike prior studies focusing on adapting either the retriever or the reader, our approach pays attention to the adaptation of the search query itself, for there is inevitably a gap between the input text and the needed knowledge in retrieval. We first prompt an LLM to generate the query, then use a web search engine to retrieve contexts. Furthermore, to better align the query to the frozen modules, we propose a trainable scheme for our pipeline. A small language model is adopted as a trainable rewriter to cater to the black-box LLM reader. The rewriter is trained using the feedback of the LLM reader by reinforcement learning. Evaluation is conducted on downstream tasks, open-domain QA and multiple-choice QA. Experiments results show consistent performance improvement, indicating that our framework is proven effective and scalable, and brings a new framework for retrieval-augmented LLM.",f743287be3ced6757de7ecb26d03815b22cd737b,Semantic Scholar,,, legoprover neural theorem proving with growing libraries,"['Huajian Xin', 'Haiming Wang', 'Chuanyang Zheng', 'Lin Li', 'Zhengying Liu', 'Qingxing Cao', 'Yinya Huang', 'Jing Xiong', 'Han Shi', 'Enze Xie', 'Jian Yin', 'Zhenguo Li', 'Xiaodan Liang', 'Heng Liao']",https://arxiv.org/pdf/2310.00656,2023-10-01,,"Despite the success of large language models (LLMs), the task of theorem proving still remains one of the hardest reasoning tasks that is far from being fully solved. Prior methods using language models have demonstrated promising results, but they still struggle to prove even middle school level theorems. One common limitation of these methods is that they assume a fixed theorem library during the whole theorem proving process. However, as we all know, creating new useful theorems or even new theories is not only helpful but crucial and necessary for advancing mathematics and proving harder and deeper results. In this work, we present LEGO-Prover, which employs a growing skill library containing verified lemmas as skills to augment the capability of LLMs used in theorem proving. By constructing the proof modularly, LEGO-Prover enables LLMs to utilize existing skills retrieved from the library and to create new skills during the proving process. These skills are further evolved (by prompting an LLM) to enrich the library on another scale. Modular and reusable skills are constantly added to the library to enable tackling increasingly intricate mathematical problems. Moreover, the learned library further bridges the gap between human proofs and formal proofs by making it easier to impute missing steps. LEGO-Prover advances the state-of-the-art pass rate on miniF2F-valid (48.0% to 57.0%) and miniF2F-test (45.5% to 47.1%). During the proving process, LEGO-Prover also manages to generate over 20,000 skills (theorems/lemmas) and adds them to the growing library. Our ablation study indicates that these newly added skills are indeed helpful for proving theorems, resulting in an improvement from a success rate of 47.1% to 50.4%. We also release our code and all the generated skills.",f8b5ee53c3410f20049e7def47bd52403fa388e3,Semantic Scholar,,, q2d turning questions into dialogs to teach models how to search,"['Yonatan Bitton', 'Shlomi Cohen-Ganor', 'Ido Hakimi', 'Yoad Lewenberg', 'Roee Aharoni', 'Enav Weinreb']",http://arxiv.org/pdf/2304.14318,2023-04-27,,"One of the exciting capabilities of recent language models for dialog is their ability to independently search for relevant information to ground a given dialog response. 
However, obtaining training data to teach models how to issue search queries is time and resource consuming. In this work, we propose q2d: an automatic data generation pipeline that generates information-seeking dialogs from questions. We prompt a large language model (PaLM) to create conversational versions of question answering datasets, and use it to improve query generation models that communicate with external search APIs to ground dialog responses. Unlike previous approaches which relied on human written dialogs with search queries, our method allows to automatically generate query-based grounded dialogs with better control and scale. Our experiments demonstrate that: (1) For query generation on the QReCC dataset, models trained on our synthetically-generated data achieve 90%--97% of the performance of models trained on the human-generated data; (2) We can successfully generate data for training dialog models in new domains without any existing dialog data as demonstrated on the multi-hop MuSiQue and Bamboogle QA datasets. (3) We perform a thorough analysis of the generated dialogs showing that humans find them of high quality and struggle to distinguish them from human-written dialogs.",33729913908d187dc0db6e41073c35643324fe4f,Semantic Scholar,,, fairnessguided fewshot prompting for large language models,"['Huan Ma', 'Changqing Zhang', 'Yatao Bian', 'Lemao Liu', 'Zhirui Zhang', 'P. Zhao', 'Shu Zhang', 'H. Fu', 'Qinghua Hu', 'Bing Wu']",http://arxiv.org/pdf/2303.13217,2023-03-23,,"Large language models have demonstrated surprising ability to perform in-context learning, i.e., these models can be directly applied to solve numerous downstream tasks by conditioning on a prompt constructed by a few input-output examples. However, prior research has shown that in-context learning can suffer from high instability due to variations in training examples, example order, and prompt formats. Therefore, the construction of an appropriate prompt is essential for improving the performance of in-context learning. In this paper, we revisit this problem from the view of predictive bias. Specifically, we introduce a metric to evaluate the predictive bias of a fixed prompt against labels or a given attributes. Then we empirically show that prompts with higher bias always lead to unsatisfactory predictive quality. Based on this observation, we propose a novel search strategy based on the greedy search to identify the near-optimal prompt for improving the performance of in-context learning. We perform comprehensive experiments with state-of-the-art mainstream models such as GPT-3 on various downstream tasks. Our results indicate that our method can enhance the model's in-context learning performance in an effective and interpretable manner.",3436ff7a1dd4c6547ba78968d3eec2545a6dccb9,Semantic Scholar,,, prompting multilingual large language models to generate codemixed texts the case of south east asian languages,"['Zheng-Xin Yong', 'Ruochen Zhang', 'J. Forde', 'Skyler Wang', 'Arjun Subramonian', 'Samuel Cahyawijaya', 'Holy Lovenia', 'Genta Indra Winata', 'Lintang Sutawika', 'Jan Christian Blaise Cruz', 'Long Phan', 'Yinghua Tan', 'Alham Fikri Aji']",https://arxiv.org/pdf/2303.13592,2023-03-23,,"The differences in decision making between behavioural models of voice interfaces are hard to capture using existing measures for the absolute performance of such models. For instance, two models may have a similar task success rate, but very different ways of getting there. 
In this paper, we propose a general methodology to compute the similarity of two dialogue behaviour models and investigate different ways of computing scores on both the semantic and the textual level. Complementing absolute measures of performance, we test our scores on three different tasks and show the practical usability of the measures.",3b27092740a489a63589cdcf40fad6a0e093daa0,Semantic Scholar,,, social simulacra creating populated prototypes for social computing systems,"['J. Park', 'Lindsay Popowski', 'Carrie J. Cai', 'M. Morris', 'Percy Liang', 'Michael S. Bernstein']",https://dl.acm.org/doi/pdf/10.1145/3526113.3545616,2022-08-08,,"Social computing prototypes probe the social behaviors that may arise in an envisioned system design. This prototyping practice is currently limited to recruiting small groups of people. Unfortunately, many challenges do not arise until a system is populated at a larger scale. Can a designer understand how a social system might behave when populated, and make adjustments to the design before the system falls prey to such challenges? We introduce social simulacra, a prototyping technique that generates a breadth of realistic social interactions that may emerge when a social computing system is populated. Social simulacra take as input the designer’s description of a community’s design—goal, rules, and member personas—and produce as output an instance of that design with simulated behavior, including posts, replies, and anti-social behaviors. We demonstrate that social simulacra shift the behaviors that they generate appropriately in response to design changes, and that they enable exploration of “what if?” scenarios where community members or moderators intervene. To power social simulacra, we contribute techniques for prompting a large language model to generate thousands of distinct community members and their social interactions with each other; these techniques are enabled by the observation that large language models’ training data already includes a wide variety of positive and negative behavior on social media platforms. In evaluations, we show that participants are often unable to distinguish social simulacra from actual community behavior and that social computing designers successfully refine their social computing designs when using social simulacra.",49b499598a8864eee55ab264fc16a5bf8d2f87ef,Semantic Scholar,,, folio natural language reasoning with firstorder logic,"['Simeng Han', 'Hailey Schoelkopf', 'Yilun Zhao', 'Zhenting Qi', 'Martin Riddell', 'Luke Benson', 'Lucy Sun', 'E. Zubova', 'Yujie Qiao', 'Matthew Burtell', 'David Peng', 'Jonathan Fan', 'Yixin Liu', 'Brian Wong', 'Malcolm Sailor', 'Ansong Ni', 'Linyong Nan', 'Jungo Kasai', 'Tao Yu', 'Rui Zhang', 'Shafiq R. Joty', 'Alexander R. Fabbri', 'Wojciech Kryscinski', 'Xi Victoria Lin', 'Caiming Xiong', 'Dragomir R. Radev']",http://arxiv.org/pdf/2209.00840,2022-09-02,,"We present FOLIO, a human-annotated, open-domain, and logically complex and diverse dataset for reasoning in natural language (NL), equipped with first order logic (FOL) annotations. FOLIO consists of 1,435 examples (unique conclusions), each paired with one of 487 sets of premises which serve as rules to be used to deductively reason for the validity of each conclusion. The logical correctness of premises and conclusions is ensured by their parallel FOL annotations, which are automatically verified by our FOL inference engine. 
In addition to the main NL reasoning task, NL-FOL pairs in FOLIO automatically constitute a new NL-FOL translation dataset using FOL as the logical form. Our experiments on FOLIO systematically evaluate the FOL reasoning ability of supervised fine-tuning on medium-sized language models (BERT, RoBERTa) and few-shot prompting on large language models (GPT-NeoX, OPT, GPT-3, Codex). For NL-FOL translation, we experiment with GPT-3 and Codex. Our results show that one of the most capable Large Language Model (LLM) publicly available, GPT-3 davinci, achieves only slightly better than random results with few-shot prompting on a subset of FOLIO, and the model is especially bad at predicting the correct truth values for False and Unknown conclusions. Our dataset and code are available at https://github.com/Yale-LILY/FOLIO.",5581bf85386737bd3378eec68189759a05280bea,Semantic Scholar,,, dictionarybased phraselevel prompting of large language models for machine translation,"['Marjan Ghazvininejad', 'Hila Gonen', 'Luke Zettlemoyer']",http://arxiv.org/pdf/2302.07856,2023-02-15,,"Large language models (LLMs) demonstrate remarkable machine translation (MT) abilities via prompting, even though they were not explicitly trained for this task. However, even given the incredible quantities of data they are trained on, LLMs can struggle to translate inputs with rare words, which are common in low resource or domain transfer scenarios. We show that LLM prompting can provide an effective solution for rare words as well, by using prior knowledge from bilingual dictionaries to provide control hints in the prompts. We propose a novel method, DiPMT, that provides a set of possible translations for a subset of the input words, thereby enabling fine-grained phrase-level prompted control of the LLM. Extensive experiments show that DiPMT outperforms the baseline both in low-resource MT, as well as for out-of-domain MT. We further provide a qualitative analysis of the benefits and limitations of this approach, including the overall level of controllability that is achieved.",64ce6ef1f5cf227bf2bf917c87273386ae16256f,Semantic Scholar,,, instructeval systematic evaluation of instruction selection methods,"['Anirudh Ajith', 'Chris Pan', 'Mengzhou Xia', 'A. Deshpande', 'Karthik Narasimhan']",https://arxiv.org/pdf/2307.00259,2023-07-01,,"In-context learning (ICL) performs tasks by prompting a large language model (LLM) using an instruction and a small set of annotated examples called demonstrations. Recent work has shown that precise details of the inputs used in the ICL prompt significantly impact performance, which has incentivized instruction selection algorithms. The effect of instruction-choice however is severely underexplored, with existing analyses restricted to shallow subsets of models and tasks, limiting the generalizability of their insights. We develop InstructEval, an ICL evaluation suite to conduct a thorough assessment of these techniques. The suite includes 13 open-sourced LLMs of varying scales from four model families, and covers nine tasks across three categories. Using the suite, we evaluate the relative performance of seven popular instruction selection methods over five metrics relevant to ICL. Our experiments reveal that using curated manually-written instructions or simple instructions without any task-specific descriptions often elicits superior ICL performance overall than that of automatic instruction-induction methods, pointing to a lack of generalizability among the latter. 
We release our evaluation suite for benchmarking instruction selection approaches and enabling more generalizable methods in this space.",6af986a2cab884fbd30ad6da2928dc19c12d83a7,Semantic Scholar,,, analyzing chainofthought prompting in large language models via gradientbased feature attributions,"['Skyler Wu', 'Eric Meng Shen', 'Charumathi Badrinath', 'Jiaqi Ma', 'Himabindu Lakkaraju']",https://arxiv.org/pdf/2307.13339,2023-07-25,,"Chain-of-thought (CoT) prompting has been shown to empirically improve the accuracy of large language models (LLMs) on various question answering tasks. While understanding why CoT prompting is effective is crucial to ensuring that this phenomenon is a consequence of desired model behavior, little work has addressed this; nonetheless, such an understanding is a critical prerequisite for responsible model deployment. We address this question by leveraging gradient-based feature attribution methods which produce saliency scores that capture the influence of input tokens on model output. Specifically, we probe several open-source LLMs to investigate whether CoT prompting affects the relative importances they assign to particular input tokens. Our results indicate that while CoT prompting does not increase the magnitude of saliency scores attributed to semantically relevant tokens in the prompt compared to standard few-shot prompting, it increases the robustness of saliency scores to question perturbations and variations in model output.",71d68782c3da41b77866c2fd0cb65726f60b3af1,Semantic Scholar,,, multimodal classifiers for openvocabulary object detection,"['Prannay Kaul', 'Weidi Xie', 'Andrew Zisserman']",http://arxiv.org/pdf/2306.05493,2023-06-08,,"The goal of this paper is open-vocabulary object detection (OVOD) – building a model that can detect objects beyond the set of categories seen at training, thus enabling the user to specify categories of interest at inference without the need for model retraining. We adopt a standard two-stage object detector architecture, and explore three ways for specifying novel categories: via language descriptions, via image exemplars, or via a combination of the two. We make three contributions: first, we prompt a large language model (LLM) to generate informative language descriptions for object classes, and construct powerful text-based classifiers; second, we employ a visual aggregator on image exemplars that can ingest any number of images as input, forming vision-based classifiers; and third, we provide a simple method to fuse information from language descriptions and image exemplars, yielding a multi-modal classifier. When evaluating on the challenging LVIS open-vocabulary benchmark we demonstrate that: (i) our text-based classifiers outperform all previous OVOD works; (ii) our vision-based classifiers perform as well as text-based classifiers in prior work; (iii) using multi-modal classifiers perform better than either modality alone; and finally, (iv) our text-based and multi-modal classifiers yield better performance than a fully-supervised detector.",73397ec77081b46f5e49a4e7486129fe2ffe7adf,Semantic Scholar,,, prompting a large language model to generate diverse motivational messages a comparison with humanwritten messages,"['Samuel Rhys Cox', 'Ashraf Abdul', 'Wei Tsang Ooi']",https://arxiv.org/pdf/2308.13479,2023-08-25,,"Large language models (LLMs) are increasingly capable and prevalent, and can be used to produce creative content. 
The quality of content is influenced by the prompt used, with more specific prompts that incorporate examples generally producing better results. On from this, it could be seen that using instructions written for crowdsourcing tasks (that are specific and include examples to guide workers) could prove effective LLM prompts. To explore this, we used a previous crowdsourcing pipeline that gave examples to people to help them generate a collectively diverse corpus of motivational messages. We then used this same pipeline to generate messages using GPT-4, and compared the collective diversity of messages from: (1) crowd-writers, (2) GPT-4 using the pipeline, and (3 & 4) two baseline GPT-4 prompts. We found that the LLM prompts using the crowdsourcing pipeline caused GPT-4 to produce more diverse messages than the two baseline prompts. We also discuss implications from messages generated by both human writers and LLMs.",8da6e4537122af618c36563caef5863f8728d789,Semantic Scholar,,, promptbased montecarlo tree search for goaloriented dialogue policy planning,"['Xiao Yu', 'Maximillian Chen', 'Zhou Yu']",http://arxiv.org/pdf/2305.13660,2023-05-23,,"Planning for goal-oriented dialogue often requires simulating future dialogue interactions and estimating task progress. Many approaches thus consider training neural networks to perform look-ahead search algorithms such as A* search and Monte Carlo Tree Search (MCTS). However, this training often requires abundant annotated data, which creates challenges when faced with noisy annotations or low-resource settings. We introduce GDP-Zero, an approach using Open-Loop MCTS to perform goal-oriented dialogue policy planning without any model training. GDP-Zero prompts a large language model to act as a policy prior, value function, user simulator, and system model during the tree search. We evaluate GDP-Zero on the goal-oriented task PersuasionForGood, and find that its responses are preferred over ChatGPT up to 59.32% of the time, and are rated more persuasive than ChatGPT during interactive evaluations.",9573e2025440219a1d3393664b3c80bda51ac8f4,Semantic Scholar,,, studenteval a benchmark of studentwritten prompts for large language models of code,"['Hannah McLean Babe', 'S. Nguyen', 'Yangtian Zi', 'Arjun Guha', 'Molly Q. Feldman', 'Carolyn Jane Anderson']",http://arxiv.org/pdf/2306.04556,2023-06-07,,"Code LLMs are being rapidly deployed and there is evidence that they can make professional programmers more productive. Current benchmarks for code generation measure whether models generate correct programs given an expert prompt. In this paper, we present a new benchmark containing multiple prompts per problem, written by a specific population of non-expert prompters: beginning programmers. StudentEval contains 1,749 prompts for 48 problems, written by 80 students who have only completed one semester of Python programming. Our students wrote these prompts while working interactively with a Code LLM, and we observed very mixed success rates. We use StudentEval to evaluate 5 Code LLMs and find that StudentEval is a better discriminator of model performance than existing benchmarks. We analyze the prompts and find significant variation in students' prompting techniques. 
We also find that nondeterministic LLM sampling could mislead students into thinking that their prompts are more (or less) effective than they actually are, which has implications for how to teach with Code LLMs.",a4929de687f3c6937dabbf733258af635781d3c4,Semantic Scholar,,, generate rather than retrieve large language models are strong context generators,"['W. Yu', 'Dan Iter', 'Shuohang Wang', 'Yichong Xu', 'Mingxuan Ju', 'Soumya Sanyal', 'Chenguang Zhu', 'Michael Zeng', 'Meng Jiang']",http://arxiv.org/pdf/2209.10063,2022-09-21,,"Knowledge-intensive tasks, such as open-domain question answering (QA), require access to a large amount of world or domain knowledge. A common approach for knowledge-intensive tasks is to employ a retrieve-then-read pipeline that first retrieves a handful of relevant contextual documents from an external corpus such as Wikipedia and then predicts an answer conditioned on the retrieved documents. In this paper, we present a novel perspective for solving knowledge-intensive tasks by replacing document retrievers with large language model generators. We call our method generate-then-read (GenRead), which first prompts a large language model to generate contextual documents based on a given question, and then reads the generated documents to produce the final answer. Furthermore, we propose a novel clustering-based prompting method that selects distinct prompts, resulting in the generated documents that cover different perspectives, leading to better recall over acceptable answers. We conduct extensive experiments on three different knowledge-intensive tasks, including open-domain QA, fact checking, and dialogue system. Notably, GenRead achieves 71.6 and 54.4 exact match scores on TriviaQA and WebQ, significantly outperforming the state-of-the-art retrieve-then-read pipeline DPR-FiD by +4.0 and +3.9, without retrieving any documents from any external knowledge source. Lastly, we demonstrate the model performance can be further improved by combining retrieval and generation. Our code and generated documents can be found at https://github.com/wyu97/GenRead.",b2542a738b75ee9b7ce1a13d8b78f9095d212412,Semantic Scholar,,, idas intent discovery with abstractive summarization,"['Maarten De Raedt', 'Fréderic Godin', 'Thomas Demeester', 'Chris Develder']",http://arxiv.org/pdf/2305.19783,2023-05-31,,"Intent discovery is the task of inferring latent intents from a set of unlabeled utterances, and is a useful step towards the efficient creation of new conversational agents. We show that recent competitive methods in intent discovery can be outperformed by clustering utterances based on abstractive summaries, i.e., “labels”, that retain the core elements while removing non-essential information. We contribute the IDAS approach, which collects a set of descriptive utterance labels by prompting a Large Language Model, starting from a well-chosen seed set of prototypical utterances, to bootstrap an In-Context Learning procedure to generate labels for non-prototypical utterances. The utterances and their resulting noisy labels are then encoded by a frozen pre-trained encoder, and subsequently clustered to recover the latent intents. For the unsupervised task (without any intent labels) IDAS outperforms the state-of-the-art by up to +7.42% in standard cluster metrics for the Banking, StackOverflow, and Transport datasets. 
For the semi-supervised task (with labels for a subset of intents) IDAS surpasses 2 recent methods on the CLINC benchmark without even using labeled data.",b9c263500281e05fddfe1f84839491f605815230,Semantic Scholar,,, reward design with language models,"['Minae Kwon', 'Sang Michael Xie', 'Kalesha Bullard', 'Dorsa Sadigh']",http://arxiv.org/pdf/2303.00001,2023-02-27,,"Reward design in reinforcement learning (RL) is challenging since specifying human notions of desired behavior may be difficult via reward functions or require many expert demonstrations. Can we instead cheaply design rewards using a natural language interface? This paper explores how to simplify reward design by prompting a large language model (LLM) such as GPT-3 as a proxy reward function, where the user provides a textual prompt containing a few examples (few-shot) or a description (zero-shot) of the desired behavior. Our approach leverages this proxy reward function in an RL framework. Specifically, users specify a prompt once at the beginning of training. During training, the LLM evaluates an RL agent's behavior against the desired behavior described by the prompt and outputs a corresponding reward signal. The RL agent then uses this reward to update its behavior. We evaluate whether our approach can train agents aligned with user objectives in the Ultimatum Game, matrix games, and the DealOrNoDeal negotiation task. In all three tasks, we show that RL agents trained with our framework are well-aligned with the user's objectives and outperform RL agents trained with reward functions learned via supervised learning",d318e0169f649656c71f02a1f84194a734fe1962,Semantic Scholar,,, leveraging training data in fewshot prompting for numerical reasoning,"['Zhanming Jie', 'Wei Lu']",http://arxiv.org/pdf/2305.18170,2023-05-29,,"Chain-of-thought (CoT) prompting with large language models has proven effective in numerous natural language processing tasks, but designing prompts that generalize well to diverse problem types can be challenging, especially in the context of math word problem (MWP) solving. Additionally, it is common to have a large amount of training data that have a better diversity coverage but CoT annotations are not available, which limits the use of supervised learning techniques. To address these issues, we investigate two approaches to leverage the training data in a few-shot prompting scenario: dynamic program prompting and program distillation. Our approach is largely inspired by Gao et al., (2022), where they proposed to replace the CoT with the programs as the intermediate reasoning step. Such a prompting strategy allows us to accurately verify the answer correctness through program execution in MWP solving. Our dynamic program prompting involves annotating the training data by sampling correct programs from a large language model, while program distillation involves adapting a smaller model to the program-annotated training data. Our experiments on three standard MWP datasets demonstrate the effectiveness of these approaches, yielding significant improvements over previous baselines for prompting and fine-tuning. 
Our results suggest that leveraging a large amount of training data can improve the generalization ability of prompts and boost the performance of fine-tuned small models in MWP solving.",d75d11d2c89c01cd284383546ae057cb827dc272,Semantic Scholar,,, spell semantic prompt evolution based on a llm,"['Yujian Betterest Li', 'Kai Wu']",https://arxiv.org/pdf/2310.01260,2023-10-02,,"Prompt engineering is a new paradigm for enhancing the performance of trained neural network models. For optimizing text-style prompts, existing methods usually individually operate small portions of a text step by step, which either breaks the fluency or could not globally adjust a prompt. Since large language models (LLMs) have powerful ability of generating coherent texts token by token, can we utilize LLMs for improving prompts? Based on this motivation, in this paper, considering a trained LLM as a text generator, we attempt to design a black-box evolution algorithm for automatically optimizing texts, namely SPELL (Semantic Prompt Evolution based on a LLM). The proposed method is evaluated with different LLMs and evolution parameters in different text tasks. Experimental results show that SPELL could rapidly improve the prompts indeed. We further explore the evolution process and discuss on the limitations, potential possibilities and future work.",e1dafedfbb55cd2200411841c2ec40e7ea827773,Semantic Scholar,,, contrastive noveltyaugmented learning anticipating outliers with large language models,"['Albert Xu', 'Xiang Ren', 'Robin Jia']",https://aclanthology.org/2023.acl-long.658.pdf,2022-11-28,,"In many task settings, text classification models are likely to encounter examples from novel classes on which they cannot predict correctly. Selective prediction, in which models abstain on low-confidence examples, provides a possible solution, but existing models are often overly confident on unseen classes. To remedy this overconfidence, we introduce Contrastive Novelty-Augmented Learning (CoNAL), a two-step method that generates OOD examples representative of novel classes, then trains to decrease confidence on them. First, we generate OOD examples by prompting a large language model twice: we prompt it to enumerate relevant novel classes, then generate examples from each novel class matching the task format. Second, we train a classifier with a novel contrastive objective that encourages lower confidence on generated OOD examples than training examples. When trained with CoNAL, classifiers improve in their ability to detect and abstain on novel class examples over prior methods by an average of 2.3% in terms of accuracy under the accuracy-coverage curve (AUAC) and 5.5% AUROC across 4 NLP datasets, with no cost to in-distribution accuracy.",fed7e4a0e8c798777f3f1613be62a2dfb776b462,Semantic Scholar,,, from prompt injections to sql injection attacks how protected is your llmintegrated web application,"['Rodrigo Pedro', 'Daniel Castro', 'Paulo Carreira', 'Nuno Santos']",https://arxiv.org/pdf/2308.01990,2023-08-03,,"Large Language Models (LLMs) have found widespread applications in various domains, including web applications, where they facilitate human interaction via chatbots with natural language interfaces. Internally, aided by an LLM-integration middleware such as Langchain, user prompts are translated into SQL queries used by the LLM to provide meaningful responses to users. However, unsanitized user prompts can lead to SQL injection attacks, potentially compromising the security of the database. 
Despite the growing interest in prompt injection vulnerabilities targeting LLMs, the specific risks of generating SQL injection attacks through prompt injections have not been extensively studied. In this paper, we present a comprehensive examination of prompt-to-SQL (P$_2$SQL) injections targeting web applications based on the Langchain framework. Using Langchain as our case study, we characterize P$_2$SQL injections, exploring their variants and impact on application security through multiple concrete examples. Furthermore, we evaluate 7 state-of-the-art LLMs, demonstrating the pervasiveness of P$_2$SQL attacks across language models. Our findings indicate that LLM-integrated applications based on Langchain are highly susceptible to P$_2$SQL injection attacks, warranting the adoption of robust defenses. To counter these attacks, we propose four effective defense techniques that can be integrated as extensions to the Langchain framework. We validate the defenses through an experimental evaluation with a real-world use case application.",0894585294c67193ff3190240554677b56fd79a0,Semantic Scholar,,, prompt injection parameterization of fixed inputs,"['Eunbi Choi', 'Yongrae Jo', 'Joel Jang', 'Minjoon Seo']",http://arxiv.org/pdf/2206.11349,2022-05-31,,"Recent works have shown that attaching prompts to the input is effective at conditioning Language Models (LM) to perform specific tasks. However, prompts are always included in the input text during inference, thus incurring substantial computational and memory overhead. Also, there is currently no straightforward method of utilizing prompts that are longer than the maximum input length of the LMs without incurring additional costs during inference. We propose Prompt Injection (PI), a novel formulation of injecting the prompt into the parameters of an LM to be an efficient alternative to attaching fixed prompts to the input. We show that in scenarios with long fixed prompts, PI can be up to 280 times more efficient in terms of total FLOPs than previous approaches. We further explore methodologies for PI and show promising results in persona-dependent conversation, semantic parsing, and zero-shot learning with task instructions. Through these explorations, we show that PI can be a promising direction for conditioning language models, especially in scenarios with long and fixed prompts.",1c475acaa1060c8318a625f24bfd88c12f367516,Semantic Scholar,,, safeguarding crowdsourcing surveys from chatgpt with prompt injection,"['Chaofan Wang', 'Samuel Kernan Freire', 'Mo Zhang', 'Jing Wei', 'Jorge Gonçalves', 'V. Kostakos', 'Zhanna Sarsenbayeva', 'Christina Schneegass', 'A. Bozzon', 'E. Niforatos']",http://arxiv.org/pdf/2306.08833,2023-06-15,,"ChatGPT and other large language models (LLMs) have proven useful in crowdsourcing tasks, where they can effectively annotate machine learning training data. However, this means that they also have the potential for misuse, specifically to automatically answer surveys. LLMs can potentially circumvent quality assurance measures, thereby threatening the integrity of methodologies that rely on crowdsourcing surveys. In this paper, we propose a mechanism to detect LLM-generated responses to surveys. The mechanism uses ""prompt injection"", such as directions that can mislead LLMs into giving predictable responses. We evaluate our technique against a range of question scenarios, types, and positions, and find that it can reliably detect LLM-generated responses with more than 93% effectiveness. 
We also provide an open-source software to help survey designers use our technique to detect LLM responses. Our work is a step in ensuring that survey methodologies remain rigorous vis-a-vis LLMs.",8c035150f883007b5af9e5bb753b78d9c0b75a55,Semantic Scholar,,, demystifying rce vulnerabilities in llmintegrated apps,"['Tong Liu', 'Zizhuang Deng', 'Guozhu Meng', 'Yuekang Li', 'Kai Chen']",https://arxiv.org/pdf/2309.02926,2023-09-06,,"In recent years, Large Language Models (LLMs) have demonstrated remarkable potential across various downstream tasks. LLM-integrated frameworks, which serve as the essential infrastructure, have given rise to many LLM-integrated web apps. However, some of these frameworks suffer from Remote Code Execution (RCE) vulnerabilities, allowing attackers to execute arbitrary code on apps' servers remotely via prompt injections. Despite the severity of these vulnerabilities, no existing work has been conducted for a systematic investigation of them. This leaves a great challenge on how to detect vulnerabilities in frameworks as well as LLM-integrated apps in real-world scenarios. To fill this gap, we present two novel strategies, including 1) a static analysis-based tool called LLMSmith to scan the source code of the framework to detect potential RCE vulnerabilities and 2) a prompt-based automated testing approach to verify the vulnerability in LLM-integrated web apps. We discovered 13 vulnerabilities in 6 frameworks, including 12 RCE vulnerabilities and 1 arbitrary file read/write vulnerability. 11 of them are confirmed by the framework developers, resulting in the assignment of 7 CVE IDs. After testing 51 apps, we found vulnerabilities in 17 apps, 16 of which are vulnerable to RCE and 1 to SQL injection. We responsibly reported all 17 issues to the corresponding developers and received acknowledgments. Furthermore, we amplify the attack impact beyond achieving RCE by allowing attackers to exploit other app users (e.g. app responses hijacking, user API key leakage) without direct interaction between the attacker and the victim. Lastly, we propose some mitigating strategies for improving the security awareness of both framework and app developers, helping them to mitigate these risks effectively.",9be0dea0d6b892a2162490fb02712efaf10c0c87,Semantic Scholar,,, prompt injection attack against llmintegrated applications,"['Yi Liu', 'Gelei Deng', 'Yuekang Li', 'Kailong Wang', 'Tianwei Zhang', 'Yepang Liu', 'Haoyu Wang', 'Yanhong Zheng', 'Yang Liu']",http://arxiv.org/pdf/2306.05499,2023-06-08,,"Large Language Models (LLMs), renowned for their superior proficiency in language comprehension and generation, stimulate a vibrant ecosystem of applications around them. However, their extensive assimilation into various services introduces significant security risks. This study deconstructs the complexities and implications of prompt injection attacks on actual LLM-integrated applications. Initially, we conduct an exploratory analysis on ten commercial applications, highlighting the constraints of current attack strategies in practice. Prompted by these limitations, we subsequently formulate HouYi, a novel black-box prompt injection attack technique, which draws inspiration from traditional web injection attacks. HouYi is compartmentalized into three crucial elements: a seamlessly-incorporated pre-constructed prompt, an injection prompt inducing context partition, and a malicious payload designed to fulfill the attack objectives. 
Leveraging HouYi, we unveil previously unknown and severe attack outcomes, such as unrestricted arbitrary LLM usage and uncomplicated application prompt theft. We deploy HouYi on 36 actual LLM-integrated applications and discern 31 applications susceptible to prompt injection. 10 vendors have validated our discoveries, including Notion, which has the potential to impact millions of users. Our investigation illuminates both the possible risks of prompt injection attacks and the possible tactics for mitigation.",db4cf9f6a653d5c15973e836c800ea47743251ae,Semantic Scholar,,, rlprompt optimizing discrete text prompts with reinforcement learning,"['Mingkai Deng', 'Jianyu Wang', 'Cheng-Ping Hsieh', 'Yihan Wang', 'Han Guo', 'Tianmin Shu', 'Meng Song', 'E. Xing', 'Zhiting Hu']",http://arxiv.org/pdf/2205.12548,2022-05-25,,"Prompting has shown impressive success in enabling large pre-trained language models (LMs) to perform diverse NLP tasks, especially with only few downstream data. Automatically finding the optimal prompt for each task, however, is challenging. Most existing work resorts to tuning *soft* prompts (e.g., embeddings) which fall short of interpretability, reusability across LMs, and applicability when gradients are not accessible. *Discrete* prompts, on the other hand, are difficult to optimize, and are often created by “enumeration (e.g., paraphrasing)-then-selection” heuristics that do not explore the prompt space systematically. This paper proposes RLPrompt, an efficient discrete prompt optimization approach with reinforcement learning (RL). RLPrompt formulates a parameter-efficient policy network that generates the optimized discrete prompt after training with reward. To harness the complex and stochastic reward signals from the large LM environment, we incorporate effective reward stabilization that substantially enhances training efficiency. RLPrompt is flexibly applicable to different types of LMs, such as masked (e.g., BERT) and left-to-right models (e.g., GPTs), for both classification and generation tasks. Experiments on few-shot classification and unsupervised text style transfer show superior performance over a wide range of existing fine-tuning or prompting methods. Interestingly, the resulting optimized prompts are often ungrammatical gibberish text; and surprisingly, those gibberish prompts are transferrable between different LMs to retain significant performance, indicating that LM prompting may not follow human language patterns.",07759a84f27e43cfa5bc8d579f8227c96e6ae1dc,Semantic Scholar,,, temporallyextended prompts optimization for sam in interactive medical image segmentation,"['Chuyun Shen', 'Wenhao Li', 'Ya Zhang', 'Xiangfeng Wang']",https://arxiv.org/pdf/2306.08958,2023-06-15,,"The Segmentation Anything Model (SAM) has recently emerged as a foundation model for addressing image segmentation. Owing to the intrinsic complexity of medical images and the high annotation cost, the medical image segmentation (MIS) community has been encouraged to investigate SAM’s zero-shot capabilities to facilitate automatic annotation. Inspired by the extraordinary accomplishments of the interactive medical image segmentation (IMIS) paradigm, this paper focuses on assessing the potential of SAM’s zero-shot capabilities within the IMIS paradigm to amplify its benefits in the MIS domain. Regrettably, we observe that SAM’s vulnerability to prompt forms (e.g., points, bounding boxes) becomes notably pronounced in IMIS. 
This leads us to develop a mechanism that adaptively offers suitable prompt forms for human experts. We refer to the mechanism above as temporally-extended prompts optimization (TEPO) and model it as a Markov decision process, solvable through reinforcement learning. Numerical experiments on the standardized benchmark Brats2020 demonstrate that the learned TEPO agent can further enhance SAM’s zero-shot capability in the MIS context.",0da5adf32fe7501a5b98eb6549b2c42af08ee094,Semantic Scholar,,, topological data analysis guided segment anything model prompt optimization for zeroshot segmentation in biological imaging,"['R. Glatt', 'Shusen Liu']",http://arxiv.org/pdf/2306.17400,2023-06-30,,"Emerging foundation models in machine learning are models trained on vast amounts of data that have been shown to generalize well to new tasks. Often these models can be prompted with multi-modal inputs that range from natural language descriptions over images to point clouds. In this paper, we propose topological data analysis (TDA) guided prompt optimization for the Segment Anything Model (SAM) and show preliminary results in the biological image segmentation domain. Our approach replaces the standard grid search approach that is used in the original implementation and finds point locations based on their topological significance. Our results show that the TDA optimized point cloud is much better suited for finding small objects and massively reduces computational complexity despite the extra step in scenarios which require many segmentations.",294b4613b21abf1e9ba499de274569360093b107,Semantic Scholar,,, unveiling the potential of knowledgeprompted chatgpt for enhancing drug trafficking detection on social media,"['Chuanbo Hu', 'Bing Liu', 'Xin Li', 'Yanfang Ye']",https://arxiv.org/pdf/2307.03699,2023-07-07,,"Social media platforms such as Instagram and Twitter have emerged as critical channels for drug marketing and illegal sale. Detecting and labeling online illicit drug trafficking activities becomes important in addressing this issue. However, the effectiveness of conventional supervised learning methods in detecting drug trafficking heavily relies on having access to substantial amounts of labeled data, while data annotation is time-consuming and resource-intensive. Furthermore, these models often face challenges in accurately identifying trafficking activities when drug dealers use deceptive language and euphemisms to avoid detection. To overcome this limitation, we conduct the first systematic study on leveraging large language models (LLMs), such as ChatGPT, to detect illicit drug trafficking activities on social media. We propose an analytical framework to compose \emph{knowledge-informed prompts}, which serve as the interface that humans can interact with and use LLMs to perform the detection task. Additionally, we design a Monte Carlo dropout based prompt optimization method to further improve performance and interpretability. Our experimental findings demonstrate that the proposed framework outperforms other baseline language models in terms of drug trafficking detection accuracy, showing a remarkable improvement of nearly 12\%. By integrating prior knowledge and the proposed prompts, ChatGPT can effectively identify and label drug trafficking activities on social networks, even in the presence of deceptive language and euphemisms used by drug dealers to evade detection. 
The implications of our research extend to social networks, emphasizing the importance of incorporating prior knowledge and scenario-based prompts into analytical tools to improve online security and public safety.",2e588fe7e07948cb9112c37d5e9dcc3a13b1bd0f,Semantic Scholar,,, robust prompt optimization for large language models against distribution shifts,"['Moxin Li', 'Wenjie Wang', 'Fuli Feng', 'Yixin Cao', 'Jizhi Zhang', 'Tat-seng Chua']",https://aclanthology.org/2023.emnlp-main.95.pdf,2023-05-23,,"Large Language Model (LLM) has demonstrated significant ability in various Natural Language Processing tasks. However, their effectiveness is highly dependent on the phrasing of the task prompt, leading to research on automatic prompt optimization using labeled task data. We reveal that these prompt optimization techniques are vulnerable to distribution shifts such as subpopulation shifts, which are common for LLMs in real-world scenarios such as customer reviews analysis. In this light, we propose a new problem of robust prompt optimization for LLMs against distribution shifts, which requires the prompt optimized over the labeled source group can simultaneously generalize to an unlabeled target group. To solve this problem, we propose Generalized Prompt Optimization framework, which incorporates the unlabeled data from the target group into prompt optimization. Extensive experimental results demonstrate the effectiveness of the proposed framework with significant performance improvement on the target group and comparable performance on the source group.",3b0c49ca5ac0f441c302c9ca4def4804253552d5,Semantic Scholar,,, incontext examples selection for machine translation,"['Sweta Agrawal', 'Chunting Zhou', 'M. Lewis', 'Luke Zettlemoyer', 'Marjan Ghazvininejad']",https://arxiv.org/pdf/2212.02437,2022-12-05,,"Large-scale generative models show an impressive ability to perform a wide range of Natural Language Processing (NLP) tasks using in-context learning, where a few examples are used to describe a task to the model. For Machine Translation (MT), these examples are typically randomly sampled from the development dataset with a similar distribution as the evaluation set. However, it is unclear how the choice of these in-context examples and their ordering impacts the output translation quality. In this work, we aim to understand the properties of good in-context examples for MT in both in-domain and out-of-domain settings. We show that the translation quality and the domain of the in-context examples matter and that 1-shot noisy unrelated example can have a catastrophic impact on output quality. While concatenating multiple random examples reduces the effect of noise, a single good prompt optimized to maximize translation quality on the development dataset can elicit learned information from the pre-trained language model. Adding similar examples based on an n-gram overlap with the test source significantly and consistently improves the translation quality of the outputs, outperforming a strong kNN-MT baseline in 2 out of 4 out-of-domain datasets.",515cf674fcdced5a7d5bb156dd5fcc1f5290e79b,Semantic Scholar,,, getting more out of mixture of language model reasoning experts,"['Chenglei Si', 'Weijia Shi', 'Chen Zhao', 'Luke Zettlemoyer', 'Jordan L. 
Boyd-Graber']",https://aclanthology.org/2023.findings-emnlp.552.pdf,2023-05-24,,"While recent large language models (LLMs) improve on various question answering (QA) datasets, it remains difficult for a single model to generalize across question types that require distinct reasoning abilities. We provide empirical evidence that state-of-the-art LLMs suffer from poor generalizability on reasoning types beyond those seen in the prompt. To remedy this, we propose a Mixture-of-Reasoning-Experts (MoRE) framework that ensembles diverse specialized language models. We specialize the backbone language model with prompts optimized for different reasoning categories, including factual, multihop, mathematical, and commonsense reasoning. Our key insight is to leverage agreement among the specialized experts to select the best answer for each question, or to abstain from answering. This gives MoRE higher accuracy than any single specialized model on a collection of 12 QA datasets from four reasoning types. Beyond generalizability, the interpretable design of MoRE improves selective question answering results compared to baselines without incorporating inter-expert agreement. This framework is also more interpretable and useful to human consumers of QA outputs. Our human study confirms that presenting expert predictions and the answer selection process helps annotators more accurately calibrate when to trust the system's output. We release all code and data to facilitate future work.",7283d616e40d7ab7422e3697218f3fc42f292bf2,Semantic Scholar,,, autohint automatic prompt optimization with hint generation,"['Hong Sun', 'Xue Li', 'Yi Xu', 'Youkow Homma', 'Qinhao Cao', 'Min-man Wu', 'Jian Jiao', 'Denis Xavier Charles']",https://arxiv.org/pdf/2307.07415,2023-07-13,,"This paper presents AutoHint, a novel framework for automatic prompt engineering and optimization for Large Language Models (LLM). While LLMs have demonstrated remarkable ability in achieving high-quality annotation in various tasks, the key to applying this ability to specific tasks lies in developing high-quality prompts. Thus we propose a framework to inherit the merits of both in-context learning and zero-shot learning by incorporating enriched instructions derived from input-output demonstrations to optimize original prompt. We refer to the enrichment as the hint and propose a framework to automatically generate the hint from labeled data. More concretely, starting from an initial prompt, our method first instructs a LLM to deduce new hints for selected samples from incorrect predictions, and then summarizes from per-sample hints and adds the results back to the initial prompt to form a new, enriched instruction. The proposed method is evaluated on the BIG-Bench Instruction Induction dataset for both zero-shot and few-shot prompts, where experiments demonstrate our method is able to significantly boost accuracy for multiple tasks.",838e1317454724a9bb758d05d97e6058e11a8251,Semantic Scholar,,, readonly prompt optimization for visionlanguage fewshot learning,"['Dongjun Lee', 'Seokwon Song', 'Jihee G. Suh', 'Joonmyeong Choi', 'S. Lee', 'Hyunwoo J.Kim']",https://arxiv.org/pdf/2308.14960,2023-08-29,,"In recent years, prompt tuning has proven effective in adapting pre-trained vision-language models to downstream tasks. These methods aim to adapt the pre-trained models by introducing learnable prompts while keeping pretrained weights frozen. 
However, learnable prompts can affect the internal representation within the self-attention module, which may negatively impact performance variance and generalization, especially in data-deficient settings. To address these issues, we propose a novel approach, Read-only Prompt Optimization (RPO). RPO leverages masked attention to prevent the internal representation shift in the pre-trained model. Further, to facilitate the optimization of RPO, the read-only prompts are initialized based on special tokens of the pre-trained model. Our extensive experiments demonstrate that RPO outperforms CLIP and CoCoOp in base-to-new generalization and domain generalization while displaying better robustness. Also, the proposed method achieves better generalization on extremely data-deficient settings, while improving parameter efficiency and computational overhead. Code is available at https://github.com/mlvlab/RPO.",b0b237dd905f12b23e3fc48ac7139e275158a007,Semantic Scholar,,, "optimizing mobileedge aigenerated everything (aigx) services by prompt engineering fundamental, framework, and case study","['Yinqiu Liu', 'Hongyang Du', 'D. Niyato', 'Jiawen Kang', 'Shuguang Cui', 'Xuemin Shen', 'Ping Zhang']",https://arxiv.org/pdf/2309.01065,2023-09-03,,"As the next-generation paradigm for content creation, AI-Generated Content (AIGC), i.e., generating content automatically by Generative AI (GAI) based on user prompts, has gained great attention and success recently. With the ever-increasing power of GAI, especially the emergence of Pretrained Foundation Models (PFMs) that contain billions of parameters and prompt engineering methods (i.e., finding the best prompts for the given task), the application range of AIGC is rapidly expanding, covering various forms of information for human, systems, and networks, such as network designs, channel coding, and optimization solutions. In this article, we present the concept of mobile-edge AI-Generated Everything (AIGX). Specifically, we first review the building blocks of AIGX, the evolution from AIGC to AIGX, as well as practical AIGX applications. Then, we present a unified mobile-edge AIGX framework, which employs edge devices to provide PFM-empowered AIGX services and optimizes such services via prompt engineering. More importantly, we demonstrate that suboptimal prompts lead to poor generation quality, which adversely affects user satisfaction, edge network performance, and resource utilization. Accordingly, we conduct a case study, showcasing how to train an effective prompt optimizer using ChatGPT and investigating how much improvement is possible with prompt engineering in terms of user experience, quality of generation, and network performance.",b349f3dd5b764168cba57bb4ad3fc240c2b3eddf,Semantic Scholar,,, automatic prompt optimization with gradient descent and beam search,"['Reid Pryzant', 'Dan Iter', 'Jerry Li', 'Y. Lee', 'Chenguang Zhu', 'Michael Zeng']",http://arxiv.org/pdf/2305.03495,2023-05-04,,"Large Language Models (LLMs) have shown impressive performance as general purpose agents, but their abilities remain highly dependent on prompts which are hand written with onerous trial-and-error effort. We propose a simple and nonparametric solution to this problem, Automatic Prompt Optimization (APO), which is inspired by numerical gradient descent to automatically improve prompts, assuming access to training data and an LLM API. The algorithm uses minibatches of data to form natural language ""gradients"" that criticize the current prompt. 
The gradients are then ""propagated"" into the prompt by editing the prompt in the opposite semantic direction of the gradient. These gradient descent steps are guided by a beam search and bandit selection procedure which significantly improves algorithmic efficiency. Preliminary results across three benchmark NLP tasks and the novel problem of LLM jailbreak detection suggest that Automatic Prompt Optimization can outperform prior prompt editing techniques and improve an initial prompt's performance by up to 31%, by using data to rewrite vague task descriptions into more precise annotation instructions.",c76dd4a70361c3afd2e19d046343e2dedd16ecc3,Semantic Scholar,,, querydependent prompt evaluation and optimization with offline inverse rl,"['Hao Sun', 'Alihan Hüyük', 'M. Schaar']",https://arxiv.org/pdf/2309.06553,2023-09-13,,"In this study, we aim to enhance the arithmetic reasoning ability of Large Language Models (LLMs) through zero-shot prompt optimization. We identify a previously overlooked objective of query dependency in such optimization and elucidate two ensuing challenges that impede the successful and economical design of prompt optimization techniques. One primary issue is the absence of an effective method to evaluate prompts during inference when the golden answer is unavailable. Concurrently, learning via interactions with the LLMs to navigate the expansive natural language prompting space proves to be resource-intensive. To address this, we introduce Prompt-OIRL, which harnesses offline inverse reinforcement learning to draw insights from offline prompting demonstration data. Such data exists as by-products when diverse prompts are benchmarked on open-accessible datasets. With Prompt-OIRL, the query-dependent prompt optimization objective is achieved by first learning an offline reward model. This model can evaluate any query-prompt pairs without accessing LLMs. Subsequently, a best-of-N strategy is deployed to recommend the optimal prompt. Our experimental evaluations across various LLM scales and arithmetic reasoning datasets underscore both the efficacy and economic viability of the proposed approach.",cd391facabf5005419b79997b2ef8473644a8192,Semantic Scholar,,, discrete prompt optimization via constrained generation for zeroshot reranker,"['Sukmin Cho', 'Soyeong Jeong', 'J. Seo', 'Jong C. Park']",http://arxiv.org/pdf/2305.13729,2023-05-23,,"Re-rankers, which order retrieved documents with respect to the relevance score on the given query, have gained attention for the information retrieval (IR) task. Rather than fine-tuning the pre-trained language model (PLM), the large-scale language model (LLM) is utilized as a zero-shot re-ranker with excellent results. While LLM is highly dependent on the prompts, the impact and the optimization of the prompts for the zero-shot re-ranker are not explored yet. Along with highlighting the impact of optimization on the zero-shot re-ranker, we propose a novel discrete prompt optimization method, Constrained Prompt generation (Co-Prompt), with the metric estimating the optimum for re-ranking. Co-Prompt guides the generated texts from PLM toward optimal prompts based on the metric without parameter update. The experimental results demonstrate that Co-Prompt leads to outstanding re-ranking performance against the baselines. 
Also, Co-Prompt generates more interpretable prompts for humans against other prompt optimization methods.",d61f0820943a667917fb6d32225826aa5279f694,Semantic Scholar,,, emotionconditioned text generation through automatic prompt optimization,"['Yarik Menchaca Resendiz', 'Roman Klinger']",https://arxiv.org/pdf/2308.04857,2023-08-09,,"Conditional natural language generation methods often require either expensive fine-tuning or training a large language model from scratch. Both are unlikely to lead to good results without a substantial amount of data and computational resources. Prompt learning without changing the parameters of a large language model presents a promising alternative. It is a cost-effective approach, while still achieving competitive results. While this procedure is now established for zero- and few-shot text classification and structured prediction, it has received limited attention in conditional text generation. We present the first automatic prompt optimization approach for emotion-conditioned text generation with instruction-fine-tuned models. Our method uses an iterative optimization procedure that changes the prompt by adding, removing, or replacing tokens. As objective function, we only require a text classifier that measures the realization of the conditional variable in the generated text. We evaluate the method on emotion-conditioned text generation with a focus on event reports and compare it to manually designed prompts that also act as the seed for the optimization procedure. The optimized prompts achieve 0.75 macro-average F1 to fulfill the emotion condition in contrast to manually designed seed prompts with only 0.22 macro-average F1.",ef5cd0eb266e3df3eb64aec18e1854fe0244d228,Semantic Scholar,,, large language models as optimizers,"['Chengrun Yang', 'Xuezhi Wang', 'Yifeng Lu', 'Hanxiao Liu', 'Quoc V. Le', 'Denny Zhou', 'Xinyun Chen']",https://arxiv.org/pdf/2309.03409,2023-09-07,,"Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.",f8a2dca1e8fe56e698984c077f7ff58d8ca867e9,Semantic Scholar,,, dialogue for prompting a policygradientbased discrete prompt optimization for fewshot learning,"['Chengzhengxu Li', 'Xiaoming Liu', 'Yichen Wang', 'Duyi Li', 'Y. Lan', 'Chao Shen']",https://arxiv.org/pdf/2308.07272,,,"Prompt-based pre-trained language models (PLMs) paradigm have succeeded substantially in few-shot natural language processing (NLP) tasks. 
However, prior discrete prompt optimization methods require expert knowledge to design the base prompt set and identify high-quality prompts, which is costly, inefficient, and subjective. Meanwhile, existing continuous prompt optimization methods improve the performance by learning the ideal prompts through the gradient information of PLMs, whose high computational cost, and low readability and generalizability are often concerning. To address the research gap, we propose a Dialogue-comprised Policy-gradient-based Discrete Prompt Optimization (DP2O) method. We first design a multi-round dialogue alignment strategy for readability prompt set generation based on GPT-4. Furthermore, we propose an efficient prompt screening metric to identify high-quality prompts with linear complexity. Finally, we construct a reinforcement learning (RL) framework based on policy gradients to match the prompts to inputs optimally. By training a policy network with only 0.67% of the PLM parameter size on the tasks in the few-shot setting, DP2O outperforms the state-of-the-art (SOTA) method by 1.52% in accuracy on average on four open-source datasets. Moreover, subsequent experiments also demonstrate that DP2O has good universality, robustness and generalization ability.",ff96527c03fbea7c3bb7d44d1d656d875ddba75e,Semantic Scholar,,, evaluation of chatgpt family of models for biomedical reasoning and classification,"['Shan Chen', 'Yingya Li', 'Sheng Lu', 'Hoang Van', 'H. Aerts', 'G. Savova', 'D. Bitterman']",http://arxiv.org/pdf/2304.02496,2023-04-05,,"Recent advances in large language models (LLMs) have shown impressive ability in biomedical question-answering, but have not been adequately investigated for more specific biomedical applications. This study investigates the performance of LLMs such as the ChatGPT family of models (GPT-3.5s, GPT-4) in biomedical tasks beyond question-answering. Because no patient data can be passed to the OpenAI API public interface, we evaluated model performance with over 10000 samples as proxies for two fundamental tasks in the clinical domain - classification and reasoning. The first task is classifying whether statements of clinical and policy recommendations in scientific literature constitute health advice. The second task is causal relation detection from the biomedical literature. We compared LLMs with simpler models, such as bag-of-words (BoW) with logistic regression, and fine-tuned BioBERT models. Despite the excitement around viral ChatGPT, we found that fine-tuning for two fundamental NLP tasks remained the best strategy. The simple BoW model performed on par with the most complex LLM prompting. Prompt engineering required significant investment.",020e473d8c987dcfb03fcfffeb87b17812447031,Semantic Scholar,,, textguided synthesis of artistic images with retrievalaugmented diffusion models,"['Robin Rombach', 'A. Blattmann', 'B. Ommer']",http://arxiv.org/pdf/2207.13038,2022-07-26,,"Novel architectures have recently improved generative image synthesis leading to excellent visual quality in various tasks. Of particular note is the field of ``AI-Art'', which has seen unprecedented growth with the emergence of powerful multimodal models such as CLIP. By combining speech and image synthesis models, so-called ``prompt-engineering'' has become established, in which carefully selected and composed sentences are used to achieve a certain visual style in the synthesized image. 
In this note, we present an alternative approach based on retrieval-augmented diffusion models (RDMs). In RDMs, a set of nearest neighbors is retrieved from an external database during training for each training instance, and the diffusion model is conditioned on these informative samples. During inference (sampling), we replace the retrieval database with a more specialized database that contains, for example, only images of a particular visual style. This provides a novel way to prompt a general trained model after training and thereby specify a particular visual style. As shown by our experiments, this approach is superior to specifying the visual style within the text prompt. We open-source code and model weights at https://github.com/CompVis/latent-diffusion .",0270ec4bc946b59c5cf6204be2553682dee0346c,Semantic Scholar,,, interactive and visual prompt engineering for adhoc task adaptation with large language models,"['Hendrik Strobelt', 'Albert Webson', 'Victor Sanh', 'Benjamin Hoover', 'Johanna Beyer', 'H. Pfister', 'Alexander M. Rush']",https://arxiv.org/pdf/2208.07852,2022-08-16,,"State-of-the-art neural language models can now be used to solve ad-hoc language tasks through zero-shot prompting without the need for supervised training. This approach has gained popularity in recent years, and researchers have demonstrated prompts that achieve strong accuracy on specific NLP tasks. However, finding a prompt for new tasks requires experimentation. Different prompt templates with different wording choices lead to significant accuracy differences. PromptIDE allows users to experiment with prompt variations, visualize prompt performance, and iteratively optimize prompts. We developed a workflow that allows users to first focus on model feedback using small data before moving on to a large data regime that allows empirical grounding of promising prompts using quantitative measures of the task. The tool then allows easy deployment of the newly created ad-hoc models. We demonstrate the utility of PromptIDE (demo: http://prompt.vizhub.ai) and our workflow using several real-world use cases.",0392d58335ce674a70f5e58ac8c438de296a0e6a,Semantic Scholar,,, "artificial intelligence for health message generation theory, method, and an empirical study using prompt engineering","['Sue Lim', 'Ralf Schmälzle']",http://arxiv.org/pdf/2212.07507,2022-12-14,,"This study introduces and examines the potential of an AI system to generate health awareness messages. The topic of folic acid, a vitamin that is critical during pregnancy, served as a test case. Using prompt engineering, we generated messages that could be used to raise awareness and compared them to retweeted human-generated messages via computational and human evaluation methods. The system was easy to use and prolific, and computational analyses revealed that the AI-generated messages were on par with human-generated ones in terms of sentiment, reading ease",040ec58865ab50b5e6d91a355ffc146ec5034e9f,Semantic Scholar,,, how does prompt engineering affect chatgpt performance on unsupervised entity resolution,"['Khanin Sisaengsuwanchai', 'Navapat Nananukul', 'M. Kejriwal']",https://arxiv.org/pdf/2310.06174,2023-10-09,,"Entity Resolution (ER) is the problem of semi-automatically determining when two entities refer to the same underlying entity, with applications ranging from healthcare to e-commerce. Traditional ER solutions required considerable manual expertise, including feature engineering, as well as identification and curation of training data. 
In many instances, such techniques are highly dependent on the domain. With recent advent in large language models (LLMs), there is an opportunity to make ER much more seamless and domain-independent. However, it is also well known that LLMs can pose risks, and that the quality of their outputs can depend on so-called prompt engineering. Unfortunately, a systematic experimental study on the effects of different prompting methods for addressing ER, using LLMs like ChatGPT, has been lacking thus far. This paper aims to address this gap by conducting such a study. Although preliminary in nature, our results show that prompting can significantly affect the quality of ER, although it affects some metrics more than others, and can also be dataset dependent.",06ab0710c8a7315e70c15c0d7eb1aa50210d945c,Semantic Scholar,,, a systematic survey of prompt engineering on visionlanguage foundation models,"['Jindong Gu', 'Zhen Han', 'Shuo Chen', 'Ahmad Beirami', 'Bailan He', 'Gengyuan Zhang', 'Ruotong Liao', 'Yao Qin', 'Volker Tresp', 'Philip H. S. Torr']",https://arxiv.org/pdf/2307.12980,,,"—Prompt engineering is a technique that involves augmenting a large pre-trained model with task-specific hints, known as prompts, to adapt the model to new tasks. Prompts can be created manually as natural language instructions or generated automatically as either natural language instructions or vector representations. Prompt engineering enables the ability to perform predictions based solely on prompts without updating model parameters, and the easier application of large pre-trained models in real-world tasks. In past years, Prompt engineering has been well-studied in natural language processing. Recently, it has also been intensively studied in vision-language modeling. However, there is currently a lack of a systematic overview of prompt engineering on pre-trained vision-language models. This paper aims to provide a comprehensive survey of cutting-edge research in prompt engineering on three types of vision-language models: multimodal-to-text generation models ( e.g., Flamingo), image-text matching models ( e.g., CLIP), and text-to-image generation models ( e.g., Stable Diffusion). For each type of model, a brief model summary, prompting methods, prompting-based applications, and the corresponding responsibility and integrity issues are summarized and discussed. Furthermore, the commonalities and differences between prompting on vision-language models, language models, and vision models are also discussed. The challenges, future directions, and research opportunities are summarized to foster future research on this topic.",06d8562831c32844285a691c5250d04726df3c61,Semantic Scholar,,, unveiling the potential of large language models in generating semantic and crosslanguage clones,"['Palash R. Roy', 'A. Alam', 'Farouq Al-Omari', 'B. Roy', 'C. Roy', 'Kevin A. Schneider']",https://arxiv.org/pdf/2309.06424,2023-09-12,,"Semantic and Cross-language code clone generation may be useful for code reuse, code comprehension, refactoring and benchmarking. OpenAI's GPT model has potential in such clone generation as GPT is used for text generation. When developers copy/paste codes from Stack Overflow (SO) or within a system, there might be inconsistent changes leading to unexpected behaviours. 
Similarly, if someone possesses a code snippet in a particular programming language but seeks equivalent functionality in a different language, a semantic cross-language code clone generation approach could provide valuable assistance. In this study, using SemanticCloneBench as a vehicle, we evaluated how well the GPT-3 model could help generate semantic and cross-language clone variants for a given fragment. We have compiled a diverse set of code fragments and assessed GPT-3's performance in generating code variants. Through extensive experimentation and analysis, where 9 judges spent 158 hours to validate, we investigate the model's ability to produce accurate and semantically correct variants. Our findings shed light on GPT-3's strengths in code generation, offering insights into the potential applications and challenges of using advanced language models in software development. Our quantitative analysis yields compelling results. In the realm of semantic clones, GPT-3 attains an impressive accuracy of 62.14% and 0.55 BLEU score, achieved through few-shot prompt engineering. Furthermore, the model shines in transcending linguistic confines, boasting an exceptional 91.25% accuracy in generating cross-language clones.",073972fa0de48db1304509041e877e568c94e7de,Semantic Scholar,,, rtllm an opensource benchmark for design rtl generation with large language model,"['Yao Lu', 'Shang Liu', 'Qijun Zhang', 'Zhiyao Xie']",https://arxiv.org/pdf/2308.05345,2023-08-10,,"Inspired by the recent success of large language models (LLMs) like ChatGPT, researchers start to explore the adoption of LLMs for agile hardware design, such as generating design RTL based on natural-language instructions. However, in existing works, their target designs are all relatively simple and in a small scale, and proposed by the authors themselves, making a fair comparison among different LLM solutions challenging. In addition, many prior works only focus on the design correctness, without evaluating the design qualities of generated design RTL. In this work, we propose an open-source benchmark named RTLLM, for generating design RTL with natural language instructions. To systematically evaluate the auto-generated design RTL, we summarized three progressive goals, named syntax goal, functionality goal, and design quality goal. This benchmark can automatically provide a quantitative evaluation of any given LLM-based solution. Furthermore, we propose an easy-to-use yet surprisingly effective prompt engineering technique named self-planning, which proves to significantly boost the performance of GPT-3.5 in our proposed benchmark.",079be8c8a93fc80274ff22251a3dac9804bec66a,Semantic Scholar,,, userfriendly image editing with minimal text input leveraging captioning and injection techniques,"['Sunwoo Kim', 'Wooseok Jang', 'Hyunsung Kim', 'Junho Kim', 'Yunjey Choi', 'Seung Wook Kim', 'Gayeong Lee']",http://arxiv.org/pdf/2306.02717,2023-06-05,,"Recent text-driven image editing in diffusion models has shown remarkable success. However, the existing methods assume that the user's description sufficiently grounds the contexts in the source image, such as objects, background, style, and their relations. This assumption is unsuitable for real-world applications because users have to manually engineer text prompts to find optimal descriptions for different images. From the users' standpoint, prompt engineering is a labor-intensive process, and users prefer to provide a target word for editing instead of a full sentence. 
To address this problem, we first demonstrate the importance of a detailed text description of the source image, by dividing prompts into three categories based on the level of semantic details. Then, we propose simple yet effective methods by combining prompt generation frameworks, thereby making the prompt engineering process more user-friendly. Extensive qualitative and quantitative experiments demonstrate the importance of prompts in text-driven image editing and our method is comparable to ground-truth prompts.",0809c278fcdec2ce297da3a9d6e031fc192263f6,Semantic Scholar,,, a prompt pattern catalog to enhance prompt engineering with chatgpt,"['Jules White', 'Quchen Fu', 'Sam Hays', 'M. Sandborn', 'Carlos Olea', 'Henry Gilbert', 'Ashraf Elnashar', 'Jesse Spencer-Smith', 'D. Schmidt']",http://arxiv.org/pdf/2302.11382,2023-02-21,,"Prompt engineering is an increasingly important skill set needed to converse effectively with large language models (LLMs), such as ChatGPT. Prompts are instructions given to an LLM to enforce rules, automate processes, and ensure specific qualities (and quantities) of generated output. Prompts are also a form of programming that can customize the outputs and interactions with an LLM. This paper describes a catalog of prompt engineering techniques presented in pattern form that have been applied to solve common problems when conversing with LLMs. Prompt patterns are a knowledge transfer method analogous to software patterns since they provide reusable solutions to common problems faced in a particular context, i.e., output generation and interaction when working with LLMs. This paper provides the following contributions to research on prompt engineering that apply LLMs to automate software development tasks. First, it provides a framework for documenting patterns for structuring prompts to solve a range of problems so that they can be adapted to different domains. Second, it presents a catalog of patterns that have been applied successfully to improve the outputs of LLM conversations. Third, it explains how prompts can be built from multiple patterns and illustrates prompt patterns that benefit from combination with other prompt patterns.",08b85bce712168998004ee80ce4e475390413c74,Semantic Scholar,,, design guidelines for prompt engineering texttoimage generative models,"['Vivian Liu', 'Lydia B. Chilton']",https://arxiv.org/pdf/2109.06977,2021-09-14,,"Text-to-image generative models are a new and powerful way to generate visual artwork. However, the open-ended nature of text as interaction is double-edged; while users can input anything and have access to an infinite range of generations, they also must engage in brute-force trial and error with the text prompt when the result quality is poor. We conduct a study exploring what prompt keywords and model hyperparameters can help produce coherent outputs. In particular, we study prompts structured to include subject and style keywords and investigate success and failure modes of these prompts. Our evaluation of 5493 generations over the course of five experiments spans 51 abstract and concrete subjects as well as 51 abstract and figurative styles. 
From this evaluation, we present design guidelines that can help people produce better outcomes from text-to-image generative models.",0968f1592f9401d72bf0d97e740496818c1a3135,Semantic Scholar,,, on codex prompt engineering for ocl generation an empirical study,"['Seif Abukhalaf', 'Mohammad Hamdaqa', 'Foutse Khomh']",https://arxiv.org/pdf/2303.16244,2023-03-29,,"The Object Constraint Language (OCL) is a declarative language that adds constraints and object query expressions to Meta-Object Facility (MOF) models. OCL can provide precision and conciseness to UML models. Nevertheless, the unfamiliar syntax of OCL has hindered its adoption by software practitioners. LLMs, such as GPT-3, have made significant progress in many NLP tasks, such as text generation and semantic parsing. Similarly, researchers have improved on the downstream tasks by fine-tuning LLMs for the target task. Codex, a GPT-3 descendant by OpenAI, has been fine-tuned on publicly available code from GitHub and has proven the ability to generate code in many programming languages, powering the AI-pair programmer Copilot. One way to take advantage of Codex is to engineer prompts for the target downstream task. In this paper, we investigate the reliability of the OCL constraints generated by Codex from natural language specifications. To achieve this, we compiled a dataset of 15 UML models and 168 specifications from various educational resources. We manually crafted a prompt template with slots to populate with the UML information and the target task in the prefix format to complete the template with the generated OCL constraint. We used both zero- and few-shot learning methods in the experiments. The evaluation is reported by measuring the syntactic validity and the execution accuracy metrics of the generated OCL constraints. Moreover, to get insight into how close or natural the generated OCL constraints are compared to human-written ones, we measured the cosine similarity between the sentence embedding of the correctly generated and human-written OCL constraints. Our findings suggest that by enriching the prompts with the UML information of the models and enabling few-shot learning, the reliability of the generated OCL constraints increases. Furthermore, the results reveal a close similarity based on sentence embedding between the generated OCL constraints and the human-written ones in the ground truth, implying a level of clarity and understandability in the generated OCL constraints by Codex.",0a0d6a98bd246a82aaaa9d33ec0eadf4ceae69dc,Semantic Scholar,,, visorgpt learning visual prior via generative pretraining,"['Jinheng Xie', 'Kai Ye', 'Yudong Li', 'Yuexiang Li', 'Kevin Lin', 'Yefeng Zheng', 'Linlin Shen', 'Mike Zheng Shou']",http://arxiv.org/pdf/2305.13777,2023-05-23,,"Various stuff and things in visual data possess specific traits, which can be learned by deep neural networks and are implicitly represented as the visual prior, e.g., object location and shape, in the model. Such prior potentially impacts many vision tasks. For example, in conditional image synthesis, spatial conditions failing to adhere to the prior can result in visually inaccurate synthetic results. This work aims to explicitly learn the visual prior and enable the customization of sampling. Inspired by advances in language modeling, we propose to learn Visual prior via Generative Pre-Training, dubbed VisorGPT. 
By discretizing visual locations of objects, e.g., bounding boxes, human pose, and instance masks, into sequences, VisorGPT can model visual prior through likelihood maximization. Besides, prompt engineering is investigated to unify various visual locations and enable customized sampling of sequential outputs from the learned prior. Experimental results demonstrate that VisorGPT can effectively model the visual prior, which can be employed for many vision tasks, such as customizing accurate human pose for conditional image synthesis models like ControlNet. Code will be released at https://github.com/Sierkinhane/VisorGPT.",0a61802b71aa044cf1fe0e81befec148e0d5001b,Semantic Scholar,,, chatgpt for robotics design principles and model abilities,"['Sai Vemprala', 'Rogerio Bonatti', 'A. Bucker', 'Ashish Kapoor']",https://arxiv.org/pdf/2306.17582,2023-02-20,,"This paper presents an experimental study regarding the use of OpenAI's ChatGPT for robotics applications. We outline a strategy that combines design principles for prompt engineering and the creation of a high-level function library which allows ChatGPT to adapt to different robotics tasks, simulators, and form factors. We focus our evaluations on the effectiveness of different prompt engineering techniques and dialog strategies towards the execution of various types of robotics tasks. We explore ChatGPT's ability to use free-form dialog, parse XML tags, and to synthesize code, in addition to the use of task-specific prompting functions and closed-loop reasoning through dialogues. Our study encompasses a range of tasks within the robotics domain, from basic logical, geometrical, and mathematical reasoning all the way to complex domains such as aerial navigation, manipulation, and embodied agents. We show that ChatGPT can be effective at solving several of such tasks, while allowing users to interact with it primarily via natural language instructions. In addition to these studies, we introduce an open-sourced research tool called PromptCraft, which contains a platform where researchers can collaboratively upload and vote on examples of good prompting schemes for robotics applications, as well as a sample robotics simulator with ChatGPT integration, making it easier for users to get started with using ChatGPT for robotics.",0ba581718f294db1d7b3dbc159cc3d3380f74606,Semantic Scholar,,, a chat about boring problems studying gptbased text normalization,"['Yang Zhang', 'Travis M. Bartley', 'Mariana Graterol-Fuenmayor', 'Vitaly Lavrukhin', 'Evelina Bakhturina', 'Boris Ginsburg']",https://arxiv.org/pdf/2309.13426,2023-09-23,,"Text normalization - the conversion of text from written to spoken form - is traditionally assumed to be an ill-formed task for language models. In this work, we argue otherwise. We empirically show the capacity of Large-Language Models (LLM) for text normalization in few-shot scenarios. Combining self-consistency reasoning with linguistic-informed prompt engineering, we find LLM based text normalization to achieve error rates around 40\% lower than top normalization systems. Further, upon error analysis, we note key limitations in the conventional design of text normalization tasks. We create a new taxonomy of text normalization errors and apply it to results from GPT-3.5-Turbo and GPT-4.0. 
Through this new framework, we can identify strengths and weaknesses of GPT-based TN, opening opportunities for future work.",0c8446eedfe083e0ee32f5c4f793e5435904014a,Semantic Scholar,,, robust preference learning for storytelling via contrastive reinforcement learning,"['Louis Castricato', 'Alexander Havrilla', 'Shahbuland Matiana', 'M. Pieler', 'Anbang Ye', 'Ian Yang', 'Spencer Frazier', 'Mark O. Riedl']",http://arxiv.org/pdf/2210.07792,2022-10-14,,"Controlled automated story generation seeks to generate natural language stories satisfying constraints from natural language critiques or preferences. Existing methods to control for story preference utilize prompt engineering which is labor intensive and often inconsistent. They may also use logit-manipulation methods which require annotated datasets to exist for the desired attributes. To address these issues, we first train a contrastive bi-encoder model to align stories with corresponding human critiques, named CARP, building a general purpose preference model. This is subsequently used as a reward function to fine-tune a generative language model via reinforcement learning. However, simply fine-tuning a generative language model with a contrastive reward model does not always reliably result in a story generation system capable of generating stories that meet user preferences. To increase story generation robustness we further fine-tune the contrastive reward model using a prompt-learning technique. A human participant study is then conducted comparing generations from our full system, ablations, and two baselines. We show that the full fine-tuning pipeline results in a story generator preferred over a LLM 20x as large as well as logit-based methods. This motivates the use of contrastive learning for general purpose human preference modeling.",0e1ae0bdcc8469db99a4f8008288e20f285f1c6d,Semantic Scholar,,, towards equitable representation in texttoimage synthesis models with the crosscultural understanding benchmark (ccub) dataset,"['Zhixuan Liu', 'Y. Shin', 'Beverley-Claire Okogwu', 'Youngsik Yun', 'Lia Coleman', 'Peter Schaldenbrand', 'Jihie Kim', 'Jean Oh']",http://arxiv.org/pdf/2301.12073,2023-01-28,,"It has been shown that accurate representation in media improves the well-being of the people who consume it. By contrast, inaccurate representations can negatively affect viewers and lead to harmful perceptions of other cultures. To achieve inclusive representation in generated images, we propose a culturally-aware priming approach for text-to-image synthesis using a small but culturally curated dataset that we collected, known here as Cross-Cultural Understanding Benchmark (CCUB) Dataset, to fight the bias prevalent in giant datasets. Our proposed approach is comprised of two fine-tuning techniques: (1) Adding visual context via fine-tuning a pre-trained text-to-image synthesis model, Stable Diffusion, on the CCUB text-image pairs, and (2) Adding semantic context via automated prompt engineering using the fine-tuned large language model, GPT-3, trained on our CCUB culturally-aware text data. CCUB dataset is curated and our approach is evaluated by people who have a personal relationship with that particular culture. 
Our experiments indicate that priming using both text and image is effective in improving the cultural relevance and decreasing the offensiveness of generated images while maintaining quality.",0e8e3d2a2f4413808c7aff7bee6e8e11ec2700d7,Semantic Scholar,,, beyond factuality a comprehensive evaluation of large language models as knowledge generators,"['Liang Chen', 'Yang Deng', 'Yatao Bian', 'Zeyu Qin', 'Bingzhe Wu', 'Tat-Seng Chua', 'Kam-Fai Wong']",https://arxiv.org/pdf/2310.07289,2023-10-11,,"Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks when being prompted to generate world knowledge. However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge. In light of this, we introduce CONNER, a COmpreheNsive kNowledge Evaluation fRamework, designed to systematically and automatically evaluate generated knowledge from six important perspectives -- Factuality, Relevance, Coherence, Informativeness, Helpfulness and Validity. We conduct an extensive empirical analysis of the generated knowledge from three different types of LLMs on two widely studied knowledge-intensive tasks, i.e., open-domain question answering and knowledge-grounded dialogue. Surprisingly, our study reveals that the factuality of generated knowledge, even if lower, does not significantly hinder downstream tasks. Instead, the relevance and coherence of the outputs are more important than small factual mistakes. Further, we show how to use CONNER to improve knowledge-intensive tasks by designing two strategies: Prompt Engineering and Knowledge Selection. Our evaluation code and LLM-generated knowledge with human annotations will be released to facilitate future research.",0f6fe87afd1a3571f77c790893b03717e5d0422a,Semantic Scholar,,, chatgpt4pcg competition characterlike level generation for science birds,"['Pittawat Taveekitworachai', 'Febri Abdullah', 'Mury F. Dewantoro', 'R. Thawonmas', 'J. Togelius', 'Jochen Renz']",https://arxiv.org/pdf/2303.15662,2023-03-28,,"This paper presents the first ChatGPT4PCG Competition at the 2023 IEEE Conference on Games. The objective of this competition is for participants to create effective prompts for ChatGPT–enabling it to generate Science Birds levels with high stability and character-like qualities–fully using their creativity as well as prompt engineering skills. ChatGPT is a conversational agent developed by OpenAI. Science Birds is selected as the competition platform because designing an Angry Birds-like level is not a trivial task due to the in-game gravity; the quality of the levels is determined by their stability. To lower the entry barrier to the competition, we limit the task to the generation of capitalized English alphabetical characters. We also allow only a single prompt to be used for generating all the characters. Here, the quality of the generated levels is determined by their stability and similarity to the given characters. A sample prompt is provided to participants for their reference. An experiment is conducted to determine the effectiveness of several modified versions of this sample prompt on level stability and similarity by testing them on several characters. 
To the best of our knowledge, we believe that ChatGPT4PCG is the first competition of its kind and hope to inspire enthusiasm for prompt engineering in procedural content generation.",0fb8f3f86476e9ab8fa4679620acb7d525b222a8,Semantic Scholar,,, contrastner contrastivebased prompt tuning for fewshot ner,"['Amirhossein Layegh', 'A. H. Payberah', 'A. Soylu', 'D. Roman', 'M. Matskin']",https://arxiv.org/pdf/2305.17951,2023-05-29,,"Prompt-based language models have produced encouraging results in numerous applications, including Named Entity Recognition (NER) tasks. NER aims to identify entities in a sentence and provide their types. However, the strong performance of most available NER approaches is heavily dependent on the design of discrete prompts and a verbalizer to map the model-predicted outputs to entity categories, which are complicated undertakings. To address these challenges, we present ContrastNER, a prompt-based NER framework that employs both discrete and continuous tokens in prompts and uses a contrastive learning approach to learn the continuous prompts and forecast entity types. The experimental results demonstrate that ContrastNER obtains competitive performance to the state-of-the-art NER methods in high-resource settings and outperforms the state-of-the-art models in low-resource circumstances without requiring extensive manual prompt engineering and verbalizer design.",1059b79598d6e08121503093f45d50fa963d2843,Semantic Scholar,,, prompting the hidden talent of webscale speech models for zeroshot task generalization,"['Puyuan Peng', 'Brian Yan', 'Shinji Watanabe', 'David F. Harwath']",https://arxiv.org/pdf/2305.11095,2023-05-18,,"We investigate the emergent abilities of the recently proposed web-scale speech model Whisper, by adapting it to unseen tasks with prompt engineering. We selected three tasks: audio-visual speech recognition (AVSR), code-switched speech recognition (CS-ASR), and speech translation (ST) on unseen language pairs. We design task-specific prompts, by either leveraging another large-scale model, or simply manipulating the special tokens in the default prompts. Experiments show that compared to the default prompts, our proposed prompts improve performance by 10% to 45% on the three zero-shot tasks, and even outperform SotA supervised models on some datasets. In addition, our experiments reveal many interesting properties of Whisper, including its robustness to prompts, bias on accents, and the multilingual understanding in its latent space. Code is available at https://github.com/jasonppy/PromptingWhisper",10e8dc07ea256c6a88d7043cf135417402ed38f4,Semantic Scholar,,, aicopilot for business optimisation a framework and a case study in production scheduling,"['Pivithuru Thejan Amarasinghe', 'Su Nguyen', 'Yuan Sun', 'D. Alahakoon']",https://arxiv.org/pdf/2309.13218,2023-09-22,,"Business optimisation refers to the process of finding and implementing efficient and cost-effective means of operation to bring a competitive advantage for businesses. Synthesizing problem formulations is an integral part of business optimisation, which relies on human expertise to construct problem formulations using optimisation languages. Interestingly, with advancements in Large Language Models (LLMs), the human expertise needed in problem formulation can be minimized. However, developing an LLM for problem formulation is challenging, due to training data, token limitations, and lack of appropriate performance metrics. 
For the requirement of training data, recent attention has been directed towards fine-tuning pre-trained LLMs for downstream tasks rather than training an LLM from scratch for a specific task. In this paper, we adopt an LLM fine-tuning approach and propose an AI-Copilot for business optimisation problem formulation. For token limitations, we introduce modularization and prompt engineering techniques to synthesize complex problem formulations as modules that fit into the token limits of LLMs. Additionally, we design performance evaluation metrics that are better suited for assessing the accuracy and quality of problem formulations. The experiment results demonstrate that with this approach we can synthesize complex and large problem formulations for a typical business optimisation problem in production scheduling.",13fafa40eb7b15813cdf6c2ead1e1032e7b085f0,Semantic Scholar,,, coaudit tools to help humans doublecheck aigenerated content,"['Andrew D. Gordon', 'Carina Negreanu', 'J. Cambronero', 'Rasika Chakravarthy', 'Ian Drosos', 'Hao Fang', 'Bhaskar Mitra', 'Hannah Richardson', 'Advait Sarkar', 'Stephanie Simmons', 'Jack Williams', 'Ben Zorn']",https://arxiv.org/pdf/2310.01297,2023-10-02,,"Users are increasingly being warned to check AI-generated content for correctness. Still, as LLMs (and other generative models) generate more complex output, such as summaries, tables, or code, it becomes harder for the user to audit or evaluate the output for quality or correctness. Hence, we are seeing the emergence of tool-assisted experiences to help the user double-check a piece of AI-generated content. We refer to these as co-audit tools. Co-audit tools complement prompt engineering techniques: one helps the user construct the input prompt, while the other helps them check the output response. As a specific example, this paper describes recent research on co-audit tools for spreadsheet computations powered by generative models. We explain why co-audit experiences are essential for any application of generative AI where quality is important and errors are consequential (as is common in spreadsheet computations). We propose a preliminary list of principles for co-audit, and outline research challenges.",14dcafae548d578f6b8c683d0972531bc46423ca,Semantic Scholar,,, chatgpt as a mapping assistant a novel method to enrich maps with generative ai and content derived from streetlevel photographs,"[""Levente Juh'asz"", 'P. Mooney', 'H. Hochmair', 'Boyuan Guan']",https://arxiv.org/pdf/2306.03204,2023-06-05,,"This paper explores the concept of leveraging generative AI as a mapping assistant for enhancing the efficiency of collaborative mapping. We present results of an experiment that combines multiple sources of volunteered geographic information (VGI) and large language models (LLMs). Three analysts described the content of crowdsourced Mapillary street-level photographs taken along roads in a small test area in Miami, Florida. GPT-3.5-turbo was instructed to suggest the most appropriate tagging for each road in OpenStreetMap (OSM). The study also explores the utilization of BLIP-2, a state-of-the-art multimodal pre-training method as an artificial analyst of street-level photographs in addition to human analysts. Results demonstrate two ways to effectively increase the accuracy of mapping suggestions without modifying the underlying AI models: by (1) providing a more detailed description of source photographs, and (2) combining prompt engineering with additional context (e.g. 
location and objects detected along a road). The first approach increases the suggestion accuracy by up to 29%, and the second one by up to 20%.",16877baf3874038233279e07e330f891455fd880,Semantic Scholar,,, using large language models to generate engaging captions for data visualizations,"['A. Liew', 'Klaus Mueller']",http://arxiv.org/pdf/2212.14047,2022-12-27,,"Creating compelling captions for data visualizations has been a long- standing challenge. Visualization researchers are typically untrained in journalistic reporting and hence the captions that are placed be- low data visualizations tend to be not overly engaging and rather just stick to basic observations about the data. In this work we explore the opportunities offered by the newly emerging crop of large language models (LLM) which use sophisticated deep learning technology to produce human-like prose. We ask, can these power-ful software devices be purposed to produce engaging captions for generic data visualizations like a scatterplot. It turns out that the key challenge lies in designing the most effective prompt for the LLM, a task called prompt engineering . We report on first experiments using the popular LLM GPT-3 and deliver some promising results.",1696e03a35f1bcc724ed9bfe69bb028b789415e8,Semantic Scholar,,, an ai chatbot for explaining deep reinforcement learning decisions of serviceoriented systems,"['Andreas Metzger', 'Jon Bartel', 'Jan Laufer']",https://arxiv.org/pdf/2309.14391,2023-09-25,,"Deep Reinforcement Learning (Deep RL) is increasingly used to cope with the open-world assumption in service-oriented systems. Deep RL was successfully applied to problems such as dynamic service composition, job scheduling, and offloading, as well as service adaptation. While Deep RL offers many benefits, understanding the decision-making of Deep RL is challenging because its learned decision-making policy essentially appears as a black box. Yet, understanding the decision-making of Deep RL is key to help service developers perform debugging, support service providers to comply with relevant legal frameworks, and facilitate service users to build trust. We introduce Chat4XAI to facilitate the understanding of the decision-making of Deep RL by providing natural-language explanations. Compared with visual explanations, the reported benefits of natural-language explanations include better understandability for non-technical users, increased user acceptance and trust, as well as more efficient explanations. Chat4XAI leverages modern AI chatbot technology and dedicated prompt engineering. Compared to earlier work on natural-language explanations using classical software-based dialogue systems, using an AI chatbot eliminates the need for eliciting and defining potential questions and answers up-front. We prototypically realize Chat4XAI using OpenAI's ChatGPT API and evaluate the fidelity and stability of its explanations using an adaptive service exemplar.",16acd2d2faa236dfe5f6ab67a0b94a9ed1b1de57,Semantic Scholar,,, "chatgpt evaluation on sentence level relations a focus on temporal, causal, and discourse relations","['Chunkit Chan', 'Cheng Jiayang', 'Weiqi Wang', 'Yuxin Jiang', 'Tianqing Fang', 'Xin Liu', 'Yangqiu Song']",http://arxiv.org/pdf/2304.14827,2023-04-28,,"This paper aims to quantitatively evaluate the performance of ChatGPT, an interactive large language model, on inter-sentential relations such as temporal relations, causal relations, and discourse relations. 
Given ChatGPT's promising performance across various tasks, we proceed to carry out thorough evaluations on the whole test sets of 11 datasets, including temporal and causal relations, PDTB2.0-based, and dialogue-based discourse relations. To ensure the reliability of our findings, we employ three tailored prompt templates for each task, including the zero-shot prompt template, zero-shot prompt engineering (PE) template, and in-context learning (ICL) prompt template, to establish the initial baseline scores for all popular sentence-pair relation classification tasks for the first time. Through our study, we discover that ChatGPT exhibits exceptional proficiency in detecting and reasoning about causal relations, albeit it may not possess the same level of expertise in identifying the temporal order between two events. While it is capable of identifying the majority of discourse relations with existing explicit discourse connectives, the implicit discourse relation remains a formidable challenge. Concurrently, ChatGPT demonstrates subpar performance in the dialogue discourse parsing task that requires structural understanding in a dialogue before being aware of the discourse relation.",186e96fe036927182ec963b63f9dd7f8ff650158,Semantic Scholar,,, prompting ai art an investigation into the creative skill of prompt engineering,"['J. Oppenlaender', 'Rhema Linder', 'Johanna M. Silvennoinen']",http://arxiv.org/pdf/2303.13534,2023-03-13,,"We are witnessing a novel era of creativity where anyone can create digital content via prompt-based learning (known as prompt engineering). This paper delves into prompt engineering as a novel creative skill for creating AI art with text-to-image generation. In a pilot study, we find that many crowdsourced participants have knowledge about art which could be used for writing effective prompts. In three subsequent studies, we explore whether crowdsourced participants can put this knowledge into practice. We examine if participants can 1) discern prompt quality, 2) write prompts, and 3) refine prompts. We find that participants could evaluate prompt quality and crafted descriptive prompts, but they lacked style-specific vocabulary necessary for effective prompting. This is in line with our hypothesis that prompt engineering is a new type of skill that is non-intuitive and must first be acquired (e.g., through means of practice and learning) before it can be used. Our studies deepen our understanding of prompt engineering and chart future research directions. We offer nine guidelines for conducting research on text-to-image generation and prompt engineering with paid crowds. We conclude by envisioning four potential futures for prompt engineering.",1bc9974780230573bfe9f89789115cb4fbf8bfc6,Semantic Scholar,,, solving and generating npr sunday puzzles with large language models,"['Jin Zhao', 'Carolyn Jane Anderson']",http://arxiv.org/pdf/2306.12255,2023-06-21,,"We explore the ability of large language models to solve and generate puzzles from the NPR Sunday Puzzle game show using PUZZLEQA, a dataset comprising 15 years of on-air puzzles. We evaluate four large language models using PUZZLEQA, in both multiple choice and free response formats, and explore two prompt engineering techniques to improve free response performance: chain-of-thought reasoning and prompt summarization. We find that state-of-the-art large language models can solve many PUZZLEQA puzzles: the best model, GPT-3.5, achieves 50.2% loose accuracy. 
However, in our few-shot puzzle generation experiment, we find no evidence that models can generate puzzles: GPT-3.5 generates puzzles with answers that do not conform to the generated rules. Puzzle generation remains a challenging task for future work.",1e5743366625128e225879dbcfb568f6b8f1bcdc,Semantic Scholar,,, "multimethod selftraining improving code generation with text, and vice versa","['Shriyash Upadhyay', 'Etan Ginsberg']",https://arxiv.org/pdf/2307.10633,2023-07-20,,"Large Language Models have many methods for solving the same problem. This introduces novel strengths (different methods may work well for different problems) and weaknesses (it may be difficult for users to know which method to use). In this paper, we introduce Multi-Method Self-Training (MMST), where one method is trained on the filtered outputs of another, allowing us to augment the strengths and ameliorate the weaknesses of each method. Using a 176B parameter model trained on both language and code, we show that MMST can 1) improve the less performant method (up to 30%) making the model easier to use, 2) improve the more performant method (up to 32.2%) making the model more performant, and 3) improve the performance of related but distinct tasks (up to 10.3%) by improving the ability of the model to generate rationales. We then conduct ablation analyses to explore why MMST works. We show that MMST generates more data than traditional self-training, but the improvement in performance is driven by the use of multiple methods. We also analyze prompt-engineering and anti-correlated performance between methods as means of making MMST more effective. We hope the evidence from our paper motivates machine learning researchers to explore ways in which advances in language models allow for new forms of training.",20d448a8712238ea34d9a18287e3bf05bc61dd2c,Semantic Scholar,,, unsupervised human activity recognition through twostage prompting with chatgpt,"['Qingxin Xia', 'T. Maekawa', 'Takahiro Hara']",http://arxiv.org/pdf/2306.02140,2023-06-03,,"Wearable sensor devices, which offer the advantage of recording daily objects used by a person while performing an activity, enable the feasibility of unsupervised Human Activity Recognition (HAR). Unfortunately, previous unsupervised approaches using the usage sequence of objects usually require a proper description of activities manually prepared by humans. Instead, we leverage the knowledge embedded in a Large Language Model (LLM) of ChatGPT. Because the sequence of objects robustly characterizes the activity identity, it is possible that ChatGPT already learned the association between activities and objects from existing contexts. However, previous prompt engineering for ChatGPT exhibits limited generalization ability when dealing with a list of words (i.e., sequence of objects) due to the similar weighting assigned to each word in the list. In this study, we propose a two-stage prompt engineering, which first guides ChatGPT to generate activity descriptions associated with objects while emphasizing important objects for distinguishing similar activities; then outputs activity classes and explanations for enhancing the contexts that are helpful for HAR. To the best of our knowledge, this is the first study that utilizes ChatGPT to recognize activities using objects in an unsupervised manner. 
We conducted our approach on three datasets and demonstrated the state-of-the-art performance.",20db2ac68c0a0daa8417696cced923e518c07681,Semantic Scholar,,, s3 socialnetwork simulation system with large language modelempowered agents,"['Chen Gao', 'Xiaochong Lan', 'Zhi-jie Lu', 'Jinzhu Mao', 'J. Piao', 'Huandong Wang', 'Depeng Jin', 'Yong Li']",https://arxiv.org/pdf/2307.14984,2023-07-27,,"Social network simulation plays a crucial role in addressing various challenges within social science. It offers extensive applications such as state prediction, phenomena explanation, and policy-making support, among others. In this work, we harness the formidable human-like capabilities exhibited by large language models (LLMs) in sensing, reasoning, and behaving, and utilize these qualities to construct the S$^3$ system (short for $\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to the widely employed agent-based simulation paradigm, we employ prompt engineering and prompt tuning techniques to ensure that the agent's behavior closely emulates that of a genuine human within the social network. Specifically, we simulate three pivotal aspects: emotion, attitude, and interaction behaviors. By endowing the agent in the system with the ability to perceive the informational environment and emulate human actions, we observe the emergence of population-level phenomena, including the propagation of information, attitudes, and emotions. We conduct an evaluation encompassing two levels of simulation, employing real-world social network data. Encouragingly, the results demonstrate promising accuracy. This work represents an initial step in the realm of social network simulation empowered by LLM-based agents. We anticipate that our endeavors will serve as a source of inspiration for the development of simulation systems within, but not limited to, social science.",221a72a3631ebf8b555c27bc864338390611feb1,Semantic Scholar,,, geotechnical parrot tales (gpt) harnessing large language models in geotechnical engineering,['Krishna Kumar'],http://arxiv.org/pdf/2304.02138,2023-04-04,,"The widespread adoption of large language models (LLMs), such as OpenAI's ChatGPT, could revolutionize various industries, including geotechnical engineering. However, GPT models can sometimes generate plausible-sounding but false outputs, leading to hallucinations. In this article, we discuss the importance of prompt engineering in mitigating these risks and harnessing the full potential of GPT for geotechnical applications. We explore the challenges and pitfalls associated with LLMs and highlight the role of context in ensuring accurate and valuable responses. Furthermore, we examine the development of context-specific search engines and the potential of LLMs to become a natural interface for complex tasks, such as data analysis and design. We also develop a unified interface using natural language to handle complex geotechnical engineering tasks and data analysis. By integrating GPT into geotechnical engineering workflows, professionals can streamline their work and develop sustainable and resilient infrastructure systems for the future.",26f560e592419891c9de1b25d0e4d4d16014d54e,Semantic Scholar,,, toward reproducing network research results using large language models,"['Qiao Xiang', 'Yuling Lin', 'Mingjun Fang', 'Bang Huang', 'Siyong Huang', 'Ridi Wen', 'Franck Le', 'L. Kong', 'Jiwu Shu']",https://arxiv.org/pdf/2309.04716,2023-09-09,,"Reproducing research results is important for the networking community. 
The current best practice typically resorts to: (1) looking for publicly available prototypes; (2) contacting the authors to get a private prototype; or (3) manually implementing a prototype following the description of the publication. However, most published network research does not have public prototypes and private ones are hard to get. As such, most reproducing efforts are spent on manual implementation based on the publications, which is both time and labor consuming and error-prone. In this paper, we boldly propose reproducing network research results using the emerging large language models (LLMs). We first prove its feasibility with a small-scale experiment, in which four students with essential networking knowledge each reproduces a different networking system published in prominent conferences and journals by prompt engineering ChatGPT. We report our observations and lessons and discuss future open research questions of this proposal.",279c798fd53c8dc84044273d08b6a060dbe9f702,Semantic Scholar,,, inducing anxiety in large language models increases exploration and bias,"['Julian Coda-Forno', 'Kristin Witte', 'A. Jagadish', 'Marcel Binz', 'Zeynep Akata', 'Eric Schulz']",http://arxiv.org/pdf/2304.11111,2023-04-21,,"Large language models are transforming research on machine learning while galvanizing public debates. Understanding not only when these models work well and succeed but also why they fail and misbehave is of great societal relevance. We propose to turn the lens of computational psychiatry, a framework used to computationally describe and modify aberrant behavior, to the outputs produced by these models. We focus on the Generative Pre-Trained Transformer 3.5 and subject it to tasks commonly studied in psychiatry. Our results show that GPT-3.5 responds robustly to a common anxiety questionnaire, producing higher anxiety scores than human subjects. Moreover, GPT-3.5's responses can be predictably changed by using emotion-inducing prompts. Emotion-induction not only influences GPT-3.5's behavior in a cognitive task measuring exploratory decision-making but also influences its behavior in a previously-established task measuring biases such as racism and ableism. Crucially, GPT-3.5 shows a strong increase in biases when prompted with anxiety-inducing text. Thus, it is likely that how prompts are communicated to large language models has a strong influence on their behavior in applied settings. These results progress our understanding of prompt engineering and demonstrate the usefulness of methods taken from computational psychiatry for studying the capable algorithms to which we increasingly delegate authority and autonomy.",27c16cca907aa43397cc226a182b73b396c5cf66,Semantic Scholar,,, conceptual design generation using large language models,"['Kevin Ma', 'Daniele Grandi', 'Christopher McComb', 'K. Goucher-Lambert']",http://arxiv.org/pdf/2306.01779,2023-05-30,," Concept generation is a creative step in the conceptual design phase, where designers often turn to brainstorming, mindmapping, or crowdsourcing design ideas to complement their own knowledge of the domain. Recent advances in natural language processing (NLP) and machine learning (ML) have led to the rise of Large Language Models (LLMs) capable of generating seemingly creative outputs from textual prompts. The success of these models has led to their integration and application across a variety of domains, including art, entertainment, and other creative work. 
In this paper, we leverage LLMs to generate solutions for a set of 12 design problems and compare them to a baseline of crowdsourced solutions. We evaluate the differences between generated and crowdsourced design solutions through multiple perspectives, including human expert evaluations and computational metrics. Expert evaluations indicate that the LLM-generated solutions have higher average feasibility and usefulness while the crowdsourced solutions have more novelty. We experiment with prompt engineering and find that leveraging few-shot learning can lead to the generation of solutions that are more similar to the crowdsourced solutions. These findings provide insight into the quality of design solutions generated with LLMs and begin to evaluate prompt engineering techniques that could be leveraged by practitioners to generate higher-quality design solutions synergistically with LLMs.",29203f0b8b9be7fd70d99bf7390c6a78b68a9289,Semantic Scholar,,, fixing hardware security bugs with large language models,"['Baleegh Ahmad', 'Shailja Thakur', 'Benjamin Tan', 'R. Karri', 'H. Pearce']",http://arxiv.org/pdf/2302.01215,2023-02-02,,"Novel AI-based code-writing Large Language Models (LLMs) such as OpenAI's Codex have demonstrated capabilities in many coding-adjacent domains. In this work we consider how LLMs may be leveraged to automatically repair security relevant bugs present in hardware designs. We focus on bug repair in code written in the Hardware Description Language Verilog. For this study we build a corpus of domain-representative hardware security bugs. We then design and implement a framework to quantitatively evaluate the performance of any LLM tasked with fixing the specified bugs. The framework supports design space exploration of prompts (i.e., prompt engineering) and identifying the best parameters for the LLM. We show that an ensemble of LLMs can repair all ten of our benchmarks. This ensemble outperforms the state-of-the-art Cirfix hardware bug repair tool on its own suite of bugs. These results show that LLMs can repair hardware security bugs and the framework is an important step towards the ultimate goal of an automated end-to-end bug repair framework.",2af6a21a1b682ceb585165359d3605e89f4cf6b0,Semantic Scholar,,, toxicity detection with generative promptbased inference,"['Yau-Shian Wang', 'Y. Chang']",https://arxiv.org/pdf/2205.12390,2022-05-24,,"Due to the subtleness, implicity, and different possible interpretations perceived by different people, detecting undesirable content from text is a nuanced difficulty. It is a long-known risk that language models (LMs), once trained on corpus containing undesirable content, have the power to manifest biases and toxicity. However, recent studies imply that, as a remedy, LMs are also capable of identifying toxic content without additional fine-tuning. Prompt-methods have been shown to effectively harvest this surprising self-diagnosing capability. However, existing prompt-based methods usually specify an instruction to a language model in a discriminative way. In this work, we explore the generative variant of zero-shot prompt-based toxicity detection with comprehensive trials on prompt engineering. We evaluate on three datasets with toxicity labels annotated on social media posts. Our analysis highlights the strengths of our generative classification approach both quantitatively and qualitatively.
Interesting aspects of self-diagnosis and its ethical implications are discussed.",2afb07359e9c67499e1f373ac6f1520d3ea9c46a,Semantic Scholar,,, exploring efl students' prompt engineering in humanai story writing an activity theory perspective,"['D. Woo', 'Kai Guo', 'Hengky Susanto']",http://arxiv.org/pdf/2306.01798,2023-06-01,,"This study applies Activity Theory to investigate how English as a foreign language (EFL) students prompt generative artificial intelligence (AI) tools during short story writing. Sixty-seven Hong Kong secondary school students created generative-AI tools using open-source language models and wrote short stories with them. The study collected and analyzed the students' generative-AI tools, short stories, and written reflections on their conditions or purposes for prompting. The research identified three main themes regarding the purposes for which students prompt generative-AI tools during short story writing: a lack of awareness of purposes, overcoming writer's block, and developing, expanding, and improving the story. The study also identified common characteristics of students' activity systems, including the sophistication of their generative-AI tools, the quality of their stories, and their school's overall academic achievement level, for their prompting of generative-AI tools for the three purposes during short story writing. The study's findings suggest that teachers should be aware of students' purposes for prompting generative-AI tools to provide tailored instructions and scaffolded guidance. The findings may also help designers provide differentiated instructions for users at various levels of story development when using a generative-AI tool.",2bb34cfe22d0d46394dd91ba8934e525563e1274,Semantic Scholar,,, pre visionlanguage prompt learning with reparameterization encoder,['Anh Pham Thi Minh'],https://arxiv.org/pdf/2309.07760,2023-09-14,,"Large pre-trained vision-language models such as CLIP have demonstrated great potential in zero-shot transferability to downstream tasks. However, to attain optimal performance, the manual selection of prompts is necessary to improve alignment between the downstream image distribution and the textual class descriptions. This manual prompt engineering is the major challenge for deploying such models in practice since it requires domain expertise and is extremely time-consuming. To avoid non-trivial prompt engineering, recent work Context Optimization (CoOp) introduced the concept of prompt learning to the vision domain using learnable textual tokens. While CoOp can achieve substantial improvements over manual prompts, its learned context is worse generalizable to wider unseen classes within the same dataset. In this work, we present Prompt Learning with Reparameterization Encoder (PRE) - a simple and efficient method that enhances the generalization ability of the learnable prompt to unseen classes while maintaining the capacity to learn Base classes. Instead of directly optimizing the prompts, PRE employs a prompt encoder to reparameterize the input prompt embeddings, enhancing the exploration of task-specific knowledge from few-shot samples. Experiments and extensive ablation studies on 8 benchmarks demonstrate that our approach is an efficient method for prompt learning. 
Specifically, PRE achieves a notable enhancement of 5.60% in average accuracy on New classes and 3% in Harmonic mean compared to CoOp in the 16-shot setting, all achieved within a good training time.",2c66f49e328ca5815c13dda106abc2c326d4f28b,Semantic Scholar,,, chainforge a visual toolkit for prompt engineering and llm hypothesis testing,"['Ian Arawjo', 'Chelse Swoopes', 'Priyan Vaithilingam', 'Martin Wattenberg', 'Elena L. Glassman']",https://arxiv.org/pdf/2309.09128,2023-09-17,,"Evaluating outputs of large language models (LLMs) is challenging, requiring making -- and making sense of -- many responses. Yet tools that go beyond basic prompting tend to require knowledge of programming APIs, focus on narrow domains, or are closed-source. We present ChainForge, an open-source visual toolkit for prompt engineering and on-demand hypothesis testing of text generation LLMs. ChainForge provides a graphical interface for comparison of responses across models and prompt variations. Our system was designed to support three tasks: model selection, prompt template design, and hypothesis testing (e.g., auditing). We released ChainForge early in its development and iterated on its design with academics and online users. Through in-lab and interview studies, we find that a range of people could use ChainForge to investigate hypotheses that matter to them, including in real-world settings. We identify three modes of prompt engineering and LLM hypothesis testing: opportunistic exploration, limited evaluation, and iterative refinement.",2ed64d90670177bf58cdce6bda04a48a8731a18f,Semantic Scholar,,, accelerated materials language processing enabled by gpt,"['Jaewoong Choi', 'Byungju Lee']",https://arxiv.org/pdf/2308.09354,2023-08-18,,"Materials language processing (MLP) is one of the key facilitators of materials science research, as it enables the extraction of structured information from massive materials science literature. Prior works suggested high-performance MLP models for text classification, named entity recognition (NER), and extractive question answering (QA), which require complex model architecture, exhaustive fine-tuning and a large number of human-labelled datasets. In this study, we develop generative pretrained transformer (GPT)-enabled pipelines where the complex architectures of prior MLP models are replaced with strategic designs of prompt engineering. First, we develop a GPT-enabled document classification method for screening relevant documents, achieving comparable accuracy and reliability compared to prior models, with only small dataset. Secondly, for NER task, we design an entity-centric prompts, and learning few-shot of them improved the performance on most of entities in three open datasets. Finally, we develop an GPT-enabled extractive QA model, which provides improved performance and shows the possibility of automatically correcting annotations. While our findings confirm the potential of GPT-enabled MLP models as well as their value in terms of reliability and practicability, our scientific methods and systematic approach are applicable to any materials science domain to accelerate the information extraction of scientific literature.",3034d8571e16e25c6a839bf492f20daf855d04a0,Semantic Scholar,,, "a sign language recognition system with pepper, lightweighttransformer, and llm","['Jongyoon Lim', 'Inkyu Sa', 'Bruce A. 
MacDonald', 'Ho Seok Ahn']",https://arxiv.org/pdf/2309.16898,2023-09-28,,"This research explores using lightweight deep neural network architectures to enable the humanoid robot Pepper to understand American Sign Language (ASL) and facilitate non-verbal human-robot interaction. First, we introduce a lightweight and efficient model for ASL understanding optimized for embedded systems, ensuring rapid sign recognition while conserving computational resources. Building upon this, we employ large language models (LLMs) for intelligent robot interactions. Through intricate prompt engineering, we tailor interactions to allow the Pepper Robot to generate natural Co-Speech Gesture responses, laying the foundation for more organic and intuitive humanoid-robot dialogues. Finally, we present an integrated software pipeline, embodying advancements in a socially aware AI interaction model. Leveraging the Pepper Robot's capabilities, we demonstrate the practicality and effectiveness of our approach in real-world scenarios. The results highlight a profound potential for enhancing human-robot interaction through non-verbal interactions, bridging communication gaps, and making technology more accessible and understandable.",31e04aec55f749dc560afe1d8673112f9b32f46b,Semantic Scholar,,, cases of efl secondary students' prompt engineering pathways to complete a writing task with chatgpt,"['D. Woo', 'Kai Guo', 'Hengky Susanto']",https://arxiv.org/pdf/2307.05493,2023-06-19,,"ChatGPT is a state-of-the-art (SOTA) chatbot. Although it has potential to support English as a foreign language (EFL) students' writing, to effectively collaborate with it, a student must learn to engineer prompts, that is, the skill of crafting appropriate instructions so that ChatGPT produces desired outputs. However, writing an appropriate prompt for ChatGPT is not straightforward for non-technical users who suffer a trial-and-error process. This paper examines the content of EFL students' ChatGPT prompts when completing a writing task and explores patterns in the quality and quantity of the prompts. The data come from iPad screen recordings of secondary school EFL students who used ChatGPT and other SOTA chatbots for the first time to complete the same writing task. The paper presents a case study of four distinct pathways that illustrate the trial-and-error process and show different combinations of prompt content and quantity. The cases contribute evidence for the need to provide prompt engineering education in the context of the EFL writing classroom, if students are to move beyond an individual trial-and-error process, learning a greater variety of prompt content and more sophisticated prompts to support their writing.",344f801663a76aa15e0dd13344261d8648c382a2,Semantic Scholar,,, "llm self defense by self examination, llms know they are being tricked","['Alec Helbling', 'Mansi Phute', 'Matthew Hull', 'Duen Horng Chau']",https://arxiv.org/pdf/2308.07308,2023-08-14,,"Large language models (LLMs) are popular for high-quality text generation but can produce harmful content, even when aligned with human values through reinforcement learning. Adversarial prompts can bypass their safety measures. We propose LLM Self Defense, a simple approach to defend against these attacks by having an LLM screen the induced responses. Our method does not require any fine-tuning, input preprocessing, or iterative output generation. 
Instead, we incorporate the generated content into a pre-defined prompt and employ another instance of an LLM to analyze the text and predict whether it is harmful. We test LLM Self Defense on GPT 3.5 and Llama 2, two of the current most prominent LLMs against various types of attacks, such as forcefully inducing affirmative responses to prompts and prompt engineering attacks. Notably, LLM Self Defense succeeds in reducing the attack success rate to virtually 0 using both GPT 3.5 and Llama 2.",34f9c825ba24889fa5e164ba9f99bfe4fc2f3e61,Semantic Scholar,,, chils zeroshot image classification with hierarchical label sets,"['Zachary Novack', 'S. Garg', 'Julian McAuley', 'Zachary Chase Lipton']",http://arxiv.org/pdf/2302.02551,2023-02-06,,"Open vocabulary models (e.g. CLIP) have shown strong performance on zero-shot classification through their ability generate embeddings for each class based on their (natural language) names. Prior work has focused on improving the accuracy of these models through prompt engineering or by incorporating a small amount of labeled downstream data (via finetuning). However, there has been little focus on improving the richness of the class names themselves, which can pose issues when class labels are coarsely-defined and are uninformative. We propose Classification with Hierarchical Label Sets (or CHiLS), an alternative strategy for zero-shot classification specifically designed for datasets with implicit semantic hierarchies. CHiLS proceeds in three steps: (i) for each class, produce a set of subclasses, using either existing label hierarchies or by querying GPT-3; (ii) perform the standard zero-shot CLIP procedure as though these subclasses were the labels of interest; (iii) map the predicted subclass back to its parent to produce the final prediction. Across numerous datasets with underlying hierarchical structure, CHiLS leads to improved accuracy in situations both with and without ground-truth hierarchical information. CHiLS is simple to implement within existing zero-shot pipelines and requires no additional training cost. Code is available at: https://github.com/acmi-lab/CHILS.",34fd95dd4dd32e704d4284fc31165e85b303bb1e,Semantic Scholar,,, flows building blocks of reasoning and collaborating ai,"['Martin Josifoski', 'Lars Klein', 'Maxime Peyrard', 'Yifei Li', 'Saibo Geng', 'Julian Paul Schnitzler', 'Yuxing Yao', 'Jiheng Wei', 'Debjit Paul', 'Robert West']",https://arxiv.org/pdf/2308.01285,2023-08-02,,"Recent advances in artificial intelligence (AI) have produced highly capable and controllable systems. This creates unprecedented opportunities for structured reasoning as well as collaboration among multiple AI systems and humans. To fully realize this potential, it is essential to develop a principled way of designing and studying such structured interactions. For this purpose, we introduce the conceptual framework Flows. Flows are self-contained building blocks of computation, with an isolated state, communicating through a standardized message-based interface. This modular design simplifies the process of creating Flows by allowing them to be recursively composed into arbitrarily nested interactions and is inherently concurrency-friendly. Crucially, any interaction can be implemented using this framework, including prior work on AI-AI and human-AI interactions, prompt engineering schemes, and tool augmentation. We demonstrate the potential of Flows on competitive coding, a challenging task on which even GPT-4 struggles. 
Our results suggest that structured reasoning and collaboration substantially improve generalization, with AI-only Flows adding +21 and human-AI Flows adding +54 absolute points in terms of solve rate. To support rapid and rigorous research, we introduce the aiFlows library embodying Flows. The aiFlows library is available at https://github.com/epfl-dlab/aiflows. Data and Flows for reproducing our experiments are available at https://github.com/epfl-dlab/cc_flows.",377d4d6c1be01b9df32edfd94b2c5946971b0108,Semantic Scholar,,, thought propagation an analogical approach to complex reasoning with large language models,"['Junchi Yu', 'Ran He', 'Rex Ying']",https://arxiv.org/pdf/2310.03965,2023-10-06,,"Large Language Models (LLMs) have achieved remarkable success in reasoning tasks with the development of prompting methods. However, existing prompting approaches cannot reuse insights of solving similar problems and suffer from accumulated errors in multi-step reasoning, since they prompt LLMs to reason \textit{from scratch}. To address these issues, we propose \textbf{\textit{Thought Propagation} (TP)}, which explores the analogous problems and leverages their solutions to enhance the complex reasoning ability of LLMs. These analogous problems are related to the input one, with reusable solutions and problem-solving strategies. Thus, it is promising to propagate insights of solving previous analogous problems to inspire new problem-solving. To achieve this, TP first prompts LLMs to propose and solve a set of analogous problems that are related to the input one. Then, TP reuses the results of analogous problems to directly yield a new solution or derive a knowledge-intensive plan for execution to amend the initial solution obtained from scratch. TP is compatible with existing prompting approaches, allowing plug-and-play generalization and enhancement in a wide range of tasks without much labor in task-specific prompt engineering. Experiments across three challenging tasks demonstrate TP enjoys a substantial improvement over the baselines by an average of 12\% absolute increase in finding the optimal solutions in Shortest-path Reasoning, 13\% improvement of human preference in Creative Writing, and 15\% enhancement in the task completion rate of LLM-Agent Planning.",3784fd84b61d482b52f7ef72aac66bcb886b892b,Semantic Scholar,,, prompt engineering for healthcare methodologies and applications,"['Jiaqi Wang', 'Enze Shi', 'Sigang Yu', 'Zihao Wu', 'Chong Ma', 'Haixing Dai', 'Qiushi Yang', 'Yanqing Kang', 'Jinru Wu', 'Huawen Hu', 'Chenxi Yue', 'Haiyang Zhang', 'Yi-Hsueh Liu', 'Xiang Li', 'Bao Ge', 'Dajiang Zhu', 'Yixuan Yuan', 'Dinggang Shen', 'Tianming Liu', 'Shu Zhang']",http://arxiv.org/pdf/2304.14670,2023-04-28,,"This review will introduce the latest advances in prompt engineering in the field of natural language processing (NLP) for the medical domain. First, we will provide a brief overview of the development of prompt engineering and emphasize its significant contributions to healthcare NLP applications such as question-answering systems, text summarization, and machine translation. With the continuous improvement of general large language models, the importance of prompt engineering in the healthcare domain is becoming increasingly prominent. The aim of this article is to provide useful resources and bridges for healthcare NLP researchers to better explore the application of prompt engineering in this field. 
We hope that this review can provide new ideas and inspire ample possibilities for research and application in medical NLP.",385376b8aa48c25403f17d6206db7c09b67e1314,Semantic Scholar,,, parafuzz an interpretabilitydriven technique for detecting poisoned samples in nlp,"['Lu Yan', 'Zhuo Zhang', 'Guanhong Tao', 'Kaiyuan Zhang', 'Xuan Chen', 'Guangyu Shen', 'Xiangyu Zhang']",https://arxiv.org/pdf/2308.02122,2023-08-04,,"Backdoor attacks have emerged as a prominent threat to natural language processing (NLP) models, where the presence of specific triggers in the input can lead poisoned models to misclassify these inputs to predetermined target classes. Current detection mechanisms are limited by their inability to address more covert backdoor strategies, such as style-based attacks. In this work, we propose an innovative test-time poisoned sample detection framework that hinges on the interpretability of model predictions, grounded in the semantic meaning of inputs. We contend that triggers (e.g., infrequent words) are not supposed to fundamentally alter the underlying semantic meanings of poisoned samples as they want to stay stealthy. Based on this observation, we hypothesize that while the model's predictions for paraphrased clean samples should remain stable, predictions for poisoned samples should revert to their true labels upon the mutations applied to triggers during the paraphrasing process. We employ ChatGPT, a state-of-the-art large language model, as our paraphraser and formulate the trigger-removal task as a prompt engineering problem. We adopt fuzzing, a technique commonly used for unearthing software vulnerabilities, to discover optimal paraphrase prompts that can effectively eliminate triggers while concurrently maintaining input semantics. Experiments on 4 types of backdoor attacks, including the subtle style backdoors, and 4 distinct datasets demonstrate that our approach surpasses baseline methods, including STRIP, RAP, and ONION, in precision and recall.",3a733c27bff68259b17dc4f835b0d192ac8fab70,Semantic Scholar,,, transforming sentiment analysis in the financial domain with chatgpt,"['G. Fatouros', 'J. Soldatos', 'Kalliopi Kouroumali', 'Georgios Makridis', 'D. Kyriazis']",https://arxiv.org/pdf/2308.07935,2023-08-13,,"Financial sentiment analysis plays a crucial role in decoding market trends and guiding strategic trading decisions. Despite the deployment of advanced deep learning techniques and language models to refine sentiment analysis in finance, this study breaks new ground by investigating the potential of large language models, particularly ChatGPT 3.5, in financial sentiment analysis, with a strong emphasis on the foreign exchange market (forex). Employing a zero-shot prompting approach, we examine multiple ChatGPT prompts on a meticulously curated dataset of forex-related news headlines, measuring performance using metrics such as precision, recall, f1-score, and Mean Absolute Error (MAE) of the sentiment class. Additionally, we probe the correlation between predicted sentiment and market returns as an additional evaluation approach. ChatGPT, compared to FinBERT, a well-established sentiment analysis model for financial texts, exhibited approximately 35\% enhanced performance in sentiment classification and a 36\% higher correlation with market returns. By underlining the significance of prompt engineering, particularly in zero-shot contexts, this study spotlights ChatGPT's potential to substantially boost sentiment analysis in financial applications. 
By sharing the utilized dataset, our intention is to stimulate further research and advancements in the field of financial services.",3c4f1244301577cffff9affc73690669725e7e08,Semantic Scholar,,, enhancing clip with gpt4 harnessing visual descriptions as prompts,"['Mayug Maniparambil', 'Chris Vorster', 'D. Molloy', 'N. Murphy', 'Kevin McGuinness', ""Noel E. O'Connor""]",https://doras.dcu.ie/28982/1/MMFM-2.pdf,2023-07-21,,"Contrastive pretrained large Vision-Language Models (VLMs) like CLIP have revolutionized visual representation learning by providing good performance on downstream datasets. VLMs are 0-shot adapted to a downstream dataset by designing prompts that are relevant to the dataset. Such prompt engineering makes use of domain expertise and a validation dataset. Meanwhile, recent developments in generative pretrained models like GPT-4 mean they can be used as advanced internet search tools. They can also be manipulated to provide visual information in any structure. In this work, we show that GPT-4 can be used to generate text that is visually descriptive and how this can be used to adapt CLIP to downstream tasks. We show considerable improvements in 0-shot transfer accuracy on specialized fine-grained datasets like EuroSAT (~7%), DTD (~ 7%), SUN397 (~ 4.6%), and CUB ( ~3.3%) when compared to CLIP’s default prompt. We also design a simple few-shot adapter that learns to choose the best possible sentences to construct generalizable classifiers that outperform the recently proposed CoCoOP by ~2% on average and by over 4% on 4 specialized fine-grained datasets. The code, prompts, and auxiliary text dataset is available at github.com/mayug/VDT-Adapter.",3e0a691277183a6704310af3e4e9e271400612bc,Semantic Scholar,,, large language models as data preprocessors,"['Haochen Zhang', 'Yuyang Dong', 'Chuan Xiao', 'M. Oyamada']",https://arxiv.org/pdf/2308.16361,2023-08-30,,"Large Language Models (LLMs), typified by OpenAI's GPT series and Meta's LLaMA variants, have marked a significant advancement in artificial intelligence. Trained on vast amounts of text data, LLMs are capable of understanding and generating human-like text across a diverse range of topics. This study expands on the applications of LLMs, exploring their potential in data preprocessing, a critical stage in data mining and analytics applications. We delve into the applicability of state-of-the-art LLMs such as GPT-3.5, GPT-4, and Vicuna-13B for error detection, data imputation, schema matching, and entity matching tasks. Alongside showcasing the inherent capabilities of LLMs, we highlight their limitations, particularly in terms of computational expense and inefficiency. We propose an LLM-based framework for data preprocessing, which integrates cutting-edge prompt engineering techniques, coupled with traditional methods like contextualization and feature selection, to improve the performance and efficiency of these models. The effectiveness of LLMs in data preprocessing is evaluated through an experimental study spanning 12 datasets. GPT-4 emerged as a standout, achieving 100\% accuracy or F1 score on 4 datasets, suggesting LLMs' immense potential in these tasks. Despite certain limitations, our study underscores the promise of LLMs in this domain and anticipates future developments to overcome current hurdles.",3e1ca026052d30e3b9677e363616fae23f6616df,Semantic Scholar,,, revisiting prompt engineering via declarative crowdsourcing,"['Aditya G. 
Parameswaran', 'Shreya Shankar', 'Parth Asawa', 'Naman Jain', 'Yujie Wang']",https://arxiv.org/pdf/2308.03854,2023-08-07,,"Large language models (LLMs) are incredibly powerful at comprehending and generating data in the form of text, but are brittle and error-prone. There has been an advent of toolkits and recipes centered around so-called prompt engineering-the process of asking an LLM to do something via a series of prompts. However, for LLM-powered data processing workflows, in particular, optimizing for quality, while keeping cost bounded, is a tedious, manual process. We put forth a vision for declarative prompt engineering. We view LLMs like crowd workers and leverage ideas from the declarative crowdsourcing literature-including leveraging multiple prompting strategies, ensuring internal consistency, and exploring hybrid-LLM-non-LLM approaches-to make prompt engineering a more principled process. Preliminary case studies on sorting, entity resolution, and imputation demonstrate the promise of our approach",3e4991bd206214f596a10e9932cd441fe5bd1f8c,Semantic Scholar,,, demonstrations of the potential of aibased political issue polling,"['Nathan Sanders', 'Alex Ulinich', 'B. Schneier']",https://arxiv.org/pdf/2307.04781,2023-07-10,,"Political polling is a multi-billion dollar industry with outsized influence on the societal trajectory of the United States and nations around the world. However, it has been challenged by factors that stress its cost, availability, and accuracy. At the same time, artificial intelligence (AI) chatbots have become compelling stand-ins for human behavior, powered by increasingly sophisticated large language models (LLMs). Could AI chatbots be an effective tool for anticipating public opinion on controversial issues to the extent that they could be used by campaigns, interest groups, and polling firms? We have developed a prompt engineering methodology for eliciting human-like survey responses from ChatGPT, which simulate the response to a policy question of a person described by a set of demographic factors, and produce both an ordinal numeric response score and a textual justification. We execute large scale experiments, querying for thousands of simulated responses at a cost far lower than human surveys. We compare simulated data to human issue polling data from the Cooperative Election Study (CES). We find that ChatGPT is effective at anticipating both the mean level and distribution of public opinion on a variety of policy issues such as abortion bans and approval of the US Supreme Court, particularly in their ideological breakdown (correlation typically>85%). However, it is less successful at anticipating demographic-level differences. Moreover, ChatGPT tends to overgeneralize to new policy issues that arose after its training data was collected, such as US support for involvement in the war in Ukraine. Our work has implications for our understanding of the strengths and limitations of the current generation of AI chatbots as virtual publics or online listening platforms, future directions for LLM development, and applications of AI tools to the political domain. (Abridged)",407a8d6227ece351d9870f96576d4c287a746166,Semantic Scholar,,, scalable 3d captioning with pretrained models,"['Tiange Luo', 'C. Rockwell', 'Honglak Lee', 'Justin Johnson']",http://arxiv.org/pdf/2306.07279,2023-06-12,,"We introduce Cap3D, an automatic approach for generating descriptive text for 3D objects. 
This approach utilizes pretrained models from image captioning, image-text alignment, and LLM to consolidate captions from multiple views of a 3D asset, completely side-stepping the time-consuming and costly process of manual annotation. We apply Cap3D to the recently introduced large-scale 3D dataset, Objaverse, resulting in 660k 3D-text pairs. Our evaluation, conducted using 41k human annotations from the same dataset, demonstrates that Cap3D surpasses human-authored descriptions in terms of quality, cost, and speed. Through effective prompt engineering, Cap3D rivals human performance in generating geometric descriptions on 17k collected annotations from the ABO dataset. Finally, we finetune Text-to-3D models on Cap3D and human captions, and show Cap3D outperforms; and benchmark the SOTA including Point-E, Shape-E, and DreamFusion.",4279a38a098d1d359881b73c6a88a112fe93443a,Semantic Scholar,,, interactive data synthesis for systematic vision adaptation via llmsaigcs collaboration,"['Qifan Yu', 'Juncheng Li', 'Wentao Ye', 'Siliang Tang', 'Yueting Zhuang']",http://arxiv.org/pdf/2305.12799,2023-05-22,,"Recent text-to-image generation models have shown promising results in generating high-fidelity photo-realistic images. In parallel, the problem of data scarcity has brought a growing interest in employing AIGC technology for high-quality data expansion. However, this paradigm requires well-designed prompt engineering that cost-less data expansion and labeling remain under-explored. Inspired by LLM's powerful capability in task guidance, we propose a new paradigm of annotated data expansion named as ChatGenImage. The core idea behind it is to leverage the complementary strengths of diverse models to establish a highly effective and user-friendly pipeline for interactive data augmentation. In this work, we extensively study how LLMs communicate with AIGC model to achieve more controllable image generation and make the first attempt to collaborate them for automatic data augmentation for a variety of downstream tasks. Finally, we present fascinating results obtained from our ChatGenImage framework and demonstrate the powerful potential of our synthetic data for systematic vision adaptation. Our codes are available at https://github.com/Yuqifan1117/Labal-Anything-Pipeline.",43a55dbd95c9d5cd82de8db276f41adeec4a937d,Semantic Scholar,,, gpt takes the bar exam,"['M. Bommarito', 'D. Katz']",http://arxiv.org/pdf/2212.14402,2022-12-29,,"Nearly all jurisdictions in the United States require a professional license exam, commonly referred to as ""the Bar Exam,"" as a precondition for law practice. To even sit for the exam, most jurisdictions require that an applicant completes at least seven years of post-secondary education, including three years at an accredited law school. In addition, most test-takers also undergo weeks to months of further, exam-specific preparation. Despite this significant investment of time and capital, approximately one in five test-takers still score under the rate required to pass the exam on their first try. In the face of a complex task that requires such depth of knowledge, what, then, should we expect of the state of the art in ""AI?"" In this research, we document our experimental evaluation of the performance of OpenAI's `text-davinci-003` model, often referred to as GPT-3.5, on the multistate multiple choice (MBE) section of the exam.
While we find no benefit in fine-tuning over GPT-3.5's zero-shot performance at the scale of our training data, we do find that hyperparameter optimization and prompt engineering positively impacted GPT-3.5's zero-shot performance. For best prompt and parameters, GPT-3.5 achieves a headline correct rate of 50.3% on a complete NCBE MBE practice exam, significantly in excess of the 25% baseline guessing rate, and performs at a passing rate for both Evidence and Torts. GPT-3.5's ranking of responses is also highly-correlated with correctness; its top two and top three choices are correct 71% and 88% of the time, respectively, indicating very strong non-entailment performance. While our ability to interpret these results is limited by nascent scientific understanding of LLMs and the proprietary nature of GPT, we believe that these results strongly suggest that an LLM will pass the MBE component of the Bar Exam in the near future.",458147b5f7242c998ec4f33798a59b7c48867329,Semantic Scholar,,, prompts matter insights and strategies for prompt engineering in automated software traceability,"['Alberto D. Rodriguez', 'Katherine R. Dearstyne', 'J. Cleland-Huang']",https://arxiv.org/pdf/2308.00229,2023-08-01,,"Large Language Models (LLMs) have the potential to revolutionize automated traceability by overcoming the challenges faced by previous methods and introducing new possibilities. However, the optimal utilization of LLMs for automated traceability remains unclear. This paper explores the process of prompt engineering to extract link predictions from an LLM. We provide detailed insights into our approach for constructing effective prompts, offering our lessons learned. Additionally, we propose multiple strategies for leveraging LLMs to generate traceability links, improving upon previous zero-shot methods on the ranking of candidate links after prompt refinement. The primary objective of this paper is to inspire and assist future researchers and engineers by highlighting the process of constructing traceability prompts to effectively harness LLMs for advancing automatic traceability.",4591f6cea22b66eccda0103b83002be45e8216b6,Semantic Scholar,,, humans in humans out on gpt converging toward common sense in both success and failure,"['Philipp E. Koralus', ""Vincent Wang-Ma'scianica""]",http://arxiv.org/pdf/2303.17276,2023-03-30,,"Increase in computational scale and fine-tuning has seen a dramatic improvement in the quality of outputs of large language models (LLMs) like GPT. Given that both GPT-3 and GPT-4 were trained on large quantities of human-generated text, we might ask to what extent their outputs reflect patterns of human thinking, both for correct and incorrect cases. The Erotetic Theory of Reason (ETR) provides a symbolic generative model of both human success and failure in thinking, across propositional, quantified, and probabilistic reasoning, as well as decision-making. We presented GPT-3, GPT-3.5, and GPT-4 with 61 central inference and judgment problems from a recent book-length presentation of ETR, consisting of experimentally verified data-points on human judgment and extrapolated data-points predicted by ETR, with correct inference patterns as well as fallacies and framing effects (the ETR61 benchmark). ETR61 includes classics like Wason's card task, illusory inferences, the decoy effect, and opportunity-cost neglect, among others. GPT-3 showed evidence of ETR-predicted outputs for 59% of these examples, rising to 77% in GPT-3.5 and 75% in GPT-4. 
Remarkably, the production of human-like fallacious judgments increased from 18% in GPT-3 to 33% in GPT-3.5 and 34% in GPT-4. This suggests that larger and more advanced LLMs may develop a tendency toward more human-like mistakes, as relevant thought patterns are inherent in human-produced training data. According to ETR, the same fundamental patterns are involved both in successful and unsuccessful ordinary reasoning, so that the ""bad"" cases could paradoxically be learned from the ""good"" cases. We further present preliminary evidence that ETR-inspired prompt engineering could reduce instances of these mistakes.",45c46687bc8d2dbdea6f92fc14d4dc7a548ddd12,Semantic Scholar,,, large language models are humanlevel prompt engineers,"['Yongchao Zhou', 'Andrei Ioan Muresanu', 'Ziwen Han', 'Keiran Paster', 'Silviu Pitis', 'Harris Chan', 'Jimmy Ba']",http://arxiv.org/pdf/2211.01910,2022-11-03,,"By conditioning on natural language instructions, large language models (LLMs) have displayed impressive capabilities as general-purpose computers. However, task performance depends significantly on the quality of the prompt used to steer the model, and most effective prompts have been handcrafted by humans. Inspired by classical program synthesis and the human approach to prompt engineering, we propose Automatic Prompt Engineer (APE) for automatic instruction generation and selection. In our method, we treat the instruction as the ""program,"" optimized by searching over a pool of instruction candidates proposed by an LLM in order to maximize a chosen score function. To evaluate the quality of the selected instruction, we evaluate the zero-shot performance of another LLM following the selected instruction. Experiments on 24 NLP tasks show that our automatically generated instructions outperform the prior LLM baseline by a large margin and achieve better or comparable performance to the instructions generated by human annotators on 19/24 tasks. We conduct extensive qualitative and quantitative analyses to explore the performance of APE. We show that APE-engineered prompts can be applied to steer models toward truthfulness and/or informativeness, as well as to improve few-shot learning performance by simply prepending them to standard in-context learning prompts. Please check out our webpage at https://sites.google.com/view/automatic-prompt-engineer.",4610ffb1b016acaa82a2065ffd1a3adbae1ce722,Semantic Scholar,,, exploring small language models with promptlearning paradigm for efficient domainspecific text classification,"['Hengyu Luo', 'Peng Liu', 'Stefan Esping']",https://arxiv.org/pdf/2309.14779,2023-09-26,,"Domain-specific text classification faces the challenge of scarce labeled data due to the high cost of manual labeling. Prompt-learning, known for its efficiency in few-shot scenarios, is proposed as an alternative to traditional fine-tuning methods. And besides, although large language models (LLMs) have gained prominence, small language models (SLMs, with under 1B parameters) offer significant customizability, adaptability, and cost-effectiveness for domain-specific tasks, given industry constraints. In this study, we investigate the potential of SLMs combined with prompt-learning paradigm for domain-specific text classification, specifically within customer-agent interactions in retail.
Our evaluations show that, in few-shot settings when prompt-based model fine-tuning is possible, T5-base, a typical SLM with 220M parameters, achieve approximately 75% accuracy with limited labeled data (up to 15% of full data), which shows great potentials of SLMs with prompt-learning. Based on this, We further validate the effectiveness of active few-shot sampling and the ensemble strategy in the prompt-learning pipeline that contribute to a remarkable performance gain. Besides, in zero-shot settings with a fixed model, we underscore a pivotal observation that, although the GPT-3.5-turbo equipped with around 154B parameters garners an accuracy of 55.16%, the power of well designed prompts becomes evident when the FLAN-T5-large, a model with a mere 0.5% of GPT-3.5-turbo's parameters, achieves an accuracy exceeding 31% with the optimized prompt, a leap from its sub-18% performance with an unoptimized one. Our findings underscore the promise of prompt-learning in classification tasks with SLMs, emphasizing the benefits of active few-shot sampling, and ensemble strategies in few-shot settings, and the importance of prompt engineering in zero-shot settings.",47d04bcfe0f1bed72d03c68cce76b4cf4be03f11,Semantic Scholar,,, prompting is all you need automated android bug replay with large language models,"['Sidong Feng', 'Chunyang Chen']",https://dl.acm.org/doi/pdf/10.1145/3597503.3608137,2023-06-03,,"Bug reports are vital for software maintenance that allow users to inform developers of the problems encountered while using the software. As such, researchers have committed considerable resources toward automating bug replay to expedite the process of software maintenance. Nonetheless, the success of current automated approaches is largely dictated by the characteristics and quality of bug reports, as they are constrained by the limitations of manually-crafted patterns and pre-defined vocabulary lists. Inspired by the success of Large Language Models (LLMs) in natural language understanding, we propose AdbGPT, a new lightweight approach to automatically reproduce the bugs from bug reports through prompt engineering, without any training and hard-coding effort. AdbGPT leverages few-shot learning and chain-of-thought reasoning to elicit human knowledge and logical reasoning from LLMs to accomplish the bug replay in a manner similar to a developer. Our evaluations demonstrate the effectiveness and efficiency of our AdbGPT to reproduce 81.3% of bug reports in 253.6 seconds, outperforming the state-of-the-art baselines and ablation studies. We also conduct a small-scale user study to confirm the usefulness of AdbGPT in enhancing developers' bug replay capabilities.",48385ded07af641da331c05f6ea3f93694a08425,Semantic Scholar,,, cotbert enhancing unsupervised sentence representation through chainofthought,"['Bowen Zhang', 'Kehua Chang', 'Chunping Li']",https://arxiv.org/pdf/2309.11143,2023-09-20,,"Unsupervised sentence representation learning aims to transform input sentences into fixed-length vectors enriched with intricate semantic information while obviating the reliance on labeled data. Recent progress within this field, propelled by contrastive learning and prompt engineering, has significantly bridged the gap between unsupervised and supervised strategies. Nonetheless, the potential utilization of Chain-of-Thought, remains largely untapped within this trajectory. 
To unlock latent capabilities within pre-trained models, such as BERT, we propose a two-stage approach for sentence representation: comprehension and summarization. Subsequently, the output of the latter phase is harnessed as the vectorized representation of the input sentence. For further performance enhancement, we meticulously refine both the contrastive learning loss function and the template denoising technique for prompt engineering. Rigorous experimentation substantiates our method, CoT-BERT, transcending a suite of robust baselines without necessitating other text representation models or external databases.",4a99a85f071e67bf15ae4bc53ec37af28b650ec4,Semantic Scholar,,, contextualizing problems to student interests at scale in intelligent tutoring system using large language models,"['Gautam Yadav', 'Ying-Jui Tseng', 'Xiaolin Ni']",http://arxiv.org/pdf/2306.00190,2023-05-31,,"Contextualizing problems to align with student interests can significantly improve learning outcomes. However, this task often presents scalability challenges due to resource and time constraints. Recent advancements in Large Language Models (LLMs) like GPT-4 offer potential solutions to these issues. This study explores the ability of GPT-4 in the contextualization of problems within CTAT, an intelligent tutoring system, aiming to increase student engagement and enhance learning outcomes. Through iterative prompt engineering, we achieved meaningful contextualization that preserved the difficulty and original intent of the problem, thereby not altering values or overcomplicating the questions. While our research highlights the potential of LLMs in educational settings, we acknowledge current limitations, particularly with geometry problems, and emphasize the need for ongoing evaluation and research. Future work includes systematic studies to measure the impact of this tool on students' learning outcomes and enhancements to handle a broader range of problems.",4b6df5f9885c9dc0ce3125791fd01824e3cf37b7,Semantic Scholar,,, backdoor attacks for incontext learning with language models,"['Nikhil Kandpal', 'Matthew Jagielski', 'Florian Tramèr', 'Nicholas Carlini']",https://arxiv.org/pdf/2307.14692,2023-07-27,,"Because state-of-the-art language models are expensive to train, most practitioners must make use of one of the few publicly available language models or language model APIs. This consolidation of trust increases the potency of backdoor attacks, where an adversary tampers with a machine learning model in order to make it perform some malicious behavior on inputs that contain a predefined backdoor trigger. We show that the in-context learning ability of large language models significantly complicates the question of developing backdoor attacks, as a successful backdoor must work against various prompting strategies and should not affect the model's general purpose capabilities. We design a new attack for eliciting targeted misclassification when language models are prompted to perform a particular target task and demonstrate the feasibility of this attack by backdooring multiple large language models ranging in size from 1.3 billion to 6 billion parameters. 
Finally we study defenses to mitigate the potential harms of our attack: for example, while in the white-box setting we show that fine-tuning models for as few as 500 steps suffices to remove the backdoor behavior, in the black-box setting we are unable to develop a successful defense that relies on prompt engineering alone.",4d21debb0f5fec315181e0912b5105c6ce4fc67f,Semantic Scholar,,, optimizing prompts for texttoimage generation,"['Y. Hao', 'Zewen Chi', 'Li Dong', 'Furu Wei']",http://arxiv.org/pdf/2212.09611,2022-12-19,,"Well-designed prompts can guide text-to-image models to generate amazing images. However, the performant prompts are often model-specific and misaligned with user input. Instead of laborious human engineering, we propose prompt adaptation, a general framework that automatically adapts original user input to model-preferred prompts. Specifically, we first perform supervised fine-tuning with a pretrained language model on a small collection of manually engineered prompts. Then we use reinforcement learning to explore better prompts. We define a reward function that encourages the policy to generate more aesthetically pleasing images while preserving the original user intentions. Experimental results on Stable Diffusion show that our method outperforms manual prompt engineering in terms of both automatic metrics and human preference ratings. Moreover, reinforcement learning further boosts performance, especially on out-of-domain prompts. The pretrained checkpoints are available at https://aka.ms/promptist. The demo can be found at https://aka.ms/promptist-demo.",4d81c33b295c092016ac236cfd32020a5bb70b97,Semantic Scholar,,, is gpt a computational model of emotion detailed analysis,"['Ala Nekouvaght Tak', 'J. Gratch']",https://arxiv.org/pdf/2307.13779,2023-07-25,,"This paper investigates the emotional reasoning abilities of the GPT family of large language models via a component perspective. The paper first examines how the model reasons about autobiographical memories. Second, it systematically varies aspects of situations to impact emotion intensity and coping tendencies. Even without the use of prompt engineering, it is shown that GPT's predictions align significantly with human-provided appraisals and emotional labels. However, GPT faces difficulties predicting emotion intensity and coping responses. GPT-4 showed the highest performance in the initial study but fell short in the second, despite providing superior results after minor prompt engineering. This assessment brings up questions on how to effectively employ the strong points and address the weak areas of these models, particularly concerning response variability. These studies underscore the merits of evaluating models from a componential perspective.",4dd461b2392a6983d36618744d2384349c4170f9,Semantic Scholar,,, a lightweight framework for highquality code generation,"['Mohammed Latif Siddiq', 'B.K. Casey', 'Joanna C. S. Santos']",https://arxiv.org/pdf/2307.08220,2023-07-17,,"In recent years, the use of automated source code generation utilizing transformer-based generative models has expanded, and these models can generate functional code according to the requirements of the developers. However, recent research revealed that these automatically generated source codes can contain vulnerabilities and other quality issues. Despite researchers' and practitioners' attempts to enhance code generation models, retraining and fine-tuning large language models is time-consuming and resource-intensive. 
Thus, we describe FRANC, a lightweight framework for recommending more secure and high-quality source code derived from transformer-based code generation models. FRANC includes a static filter to make the generated code compilable with heuristics and a quality-aware ranker to sort the code snippets based on a quality score. Moreover, the framework uses prompt engineering to fix persistent quality issues. We evaluated the framework with five Python and Java code generation models and six prompt datasets, including a newly created one in this work (SOEval). The static filter improves the compilability of 9% to 46% of Java suggestions and 10% to 43% of Python suggestions. The average improvement over the NDCG@10 score for the ranking system is 0.0763, and the repairing techniques repair up to 80% of prompts. FRANC takes, on average, 1.98 seconds for Java; for Python, it takes 0.08 seconds.",4e96d7fa9f27857523d786230294fbcc6060212c,Semantic Scholar,,, llms killed the script kiddie how agents supported by large language models change the landscape of network threat testing,"['Stephen Moskal', 'Sam Laney', 'Erik Hemberg', 'Una-May O’Reilly']",https://arxiv.org/pdf/2310.06936,2023-10-11,,"In this paper, we explore the potential of Large Language Models (LLMs) to reason about threats, generate information about tools, and automate cyber campaigns. We begin with a manual exploration of LLMs in supporting specific threat-related actions and decisions. We proceed by automating the decision process in a cyber campaign. We present prompt engineering approaches for a plan-act-report loop for one action of a threat campaign and a prompt chaining design that directs the sequential decision process of a multi-action campaign. We assess the extent of LLM's cyber-specific knowledge w.r.t. the short campaign we demonstrate and provide insights into prompt design for eliciting actionable responses. We discuss the potential impact of LLMs on the threat landscape and the ethical considerations of using LLMs for accelerating threat actor capabilities. We report a promising, yet concerning, application of generative AI to cyber threats. However, the LLM's capabilities to deal with more complex networks, sophisticated vulnerabilities, and the sensitivity of prompts are open questions. This research should spur deliberations over the inevitable advancements in the LLM-supported cyber adversarial landscape.",50aaac5fdc2b5a33bfd3ba93cdf4e5e302f34297,Semantic Scholar,,, zeroshot nuclei detection via visuallanguage pretrained models,"['Yongjian Wu', 'Yangqiaoyu Zhou', 'Jiya Saiyin', 'Bingzheng Wei', 'Maode Lai', 'Jianzhong Shou', 'Yubo Fan', 'Yan Xu']",http://arxiv.org/pdf/2306.17659,2023-06-30,,"Large-scale visual-language pre-trained models (VLPM) have proven their excellent performance in downstream object detection for natural scenes. However, zero-shot nuclei detection on H&E images via VLPMs remains underexplored. The large gap between medical images and the web-originated text-image pairs used for pre-training makes it a challenging task. In this paper, we attempt to explore the potential of the object-level VLPM, Grounded Language-Image Pre-training (GLIP) model, for zero-shot nuclei detection. Concretely, an automatic prompt design pipeline is devised based on the association binding trait of VLPM and the image-to-text VLPM BLIP, avoiding empirical manual prompt engineering. 
We further establish a self-training framework, using the automatically designed prompts to generate the preliminary results as pseudo labels from GLIP and refine the predicted boxes in an iterative manner. Our method achieves a remarkable performance for label-free nuclei detection, surpassing other comparison methods. Foremost, our work demonstrates that the VLPM pre-trained on natural image-text pairs exhibits astonishing potential for downstream tasks in the medical field as well. Code will be released at https://github.com/wuyongjianCODE/VLPMNuD.",50bbca86de82d6b72d92bba0ec988b58e644dac3,Semantic Scholar,,, gptclonebench a comprehensive benchmark of semantic clones and crosslanguage clones using gpt3 model and semanticclonebench,"['A. Alam', 'P. Roy', 'Farouq Al-Omari', 'C. Roy', 'B. Roy', 'Kevin A. Schneider']",https://arxiv.org/pdf/2308.13963,2023-08-26,,"With the emergence of Machine Learning, there has been a surge in leveraging its capabilities for problem-solving across various domains. In the code clone realm, the identification of type-4 or semantic clones has emerged as a crucial yet challenging task. Researchers aim to utilize Machine Learning to tackle this challenge, often relying on the Big-CloneBench dataset. However, it’s worth noting that BigCloneBench, originally not designed for semantic clone detection, presents several limitations that hinder its suitability as a comprehensive training dataset for this specific purpose. Furthermore, CLCDSA dataset suffers from a lack of reusable examples aligning with real-world software systems, rendering it inadequate for cross-language clone detection approaches. In this work, we present a comprehensive semantic clone and cross-language clone benchmark, GPTCloneBench 1 by exploiting SemanticCloneBench and OpenAI’s GPT-3 model. In particular, using code fragments from SemanticCloneBench as sample inputs along with appropriate prompt engineering for GPT-3 model, we generate semantic and cross-language clones for these specific fragments and then conduct a combination of extensive manual analysis, tool-assisted filtering, functionality testing and automated validation in building the benchmark. From 79,928 clone pairs of GPT-3 output, we created a benchmark with 37,149 true semantic clone pairs, 19,288 false semantic pairs(Type-1/Type-2), and 20,770 cross-language clones across four languages (Java, C, C#, and Python). Our benchmark is 15-fold larger than SemanticCloneBench, has more functional code examples for software systems and programming language support than CLCDSA, and overcomes BigCloneBench’s qualities, quantification, and language variety limitations. GPTCloneBench can be found here1.",50d40d05598e456188a3be42983b8daabd3f04f7,Semantic Scholar,,, symbolic knowledge distillation from general language models to commonsense models,"['Peter West', 'Chandrasekhar Bhagavatula', 'Jack Hessel', 'Jena D. Hwang', 'Liwei Jiang', 'Ronan Le Bras', 'Ximing Lu', 'S. Welleck', 'Yejin Choi']",https://aclanthology.org/2022.naacl-main.341.pdf,2021-10-14,,"The common practice for training commonsense models has gone from–human–to–corpus–to–machine: humans author commonsense knowledge graphs in order to train commonsense models. In this work, we investigate an alternative, from–machine–to–corpus–to–machine: general language models author these commonsense knowledge graphs to train commonsense models. Our study leads to a new framework, Symbolic Knowledge Distillation. As with prior art in Knowledge Distillation (Hinton et al. 
2015), our approach uses larger models to teach smaller models. A key difference is that we distill knowledge symbolically–as text–in addition to the neural model. We distill only one aspect–the commonsense of a general language model teacher, allowing the student to be a different type, a commonsense model. Altogether, we show that careful prompt engineering and a separately trained critic model allow us to selectively distill high-quality causal commonsense from GPT-3, a general language model. Empirical results demonstrate that, for the first time, a human-authored commonsense knowledge graph is surpassed by our automatically distilled variant in all three criteria: quantity, quality, and diversity. In addition, it results in a neural commonsense model that surpasses the teacher model’s commonsense capabilities despite its 100x smaller size. We apply this to the ATOMIC resource, and will share our new symbolic knowledge graph and commonsense models.",521ccc898395a2818fced22b4cf371b0e5121f94,Semantic Scholar,,, can prompt learning benefit radiology report generation,"['Jun Wang', 'Lixing Zhu', 'A. Bhalerao', 'Yulan He']",https://arxiv.org/pdf/2308.16269,2023-08-30,,"Radiology report generation aims to automatically provide clinically meaningful descriptions of radiology images such as MRI and X-ray. Although great success has been achieved in natural scene image captioning tasks, radiology report generation remains challenging and requires prior medical knowledge. In this paper, we propose PromptRRG, a method that utilizes prompt learning to activate a pretrained model and incorporate prior knowledge. Since prompt learning for radiology report generation has not been explored before, we begin with investigating prompt designs and categorise them based on varying levels of knowledge: common, domain-specific and disease-enriched prompts. Additionally, we propose an automatic prompt learning mechanism to alleviate the burden of manual prompt engineering. This is the first work to systematically examine the effectiveness of prompt learning for radiology report generation. Experimental results on the largest radiology report generation benchmark, MIMIC-CXR, demonstrate that our proposed method achieves state-of-the-art performance. Code will be available upon the acceptance.",531678c18fd2c5a9620b68f3550131fc3fd3636c,Semantic Scholar,,, just tell me prompt engineering in business process management,"['Kiran Busch', 'Alexander Rochlitzer', 'Diana Sola', 'H. Leopold']",http://arxiv.org/pdf/2304.07183,2023-04-14,,"GPT-3 and several other language models (LMs) can effectively address various natural language processing (NLP) tasks, including machine translation and text summarization. Recently, they have also been successfully employed in the business process management (BPM) domain, e.g., for predictive process monitoring and process extraction from text. This, however, typically requires fine-tuning the employed LM, which, among others, necessitates large amounts of suitable training data. A possible solution to this problem is the use of prompt engineering, which leverages pre-trained LMs without fine-tuning them. Recognizing this, we argue that prompt engineering can help bring the capabilities of LMs to BPM research. 
We use this position paper to develop a research agenda for the use of prompt engineering for BPM research by identifying the associated potentials and challenges.",53e7475a3ed0caee37122a9dbdb53d1da0691a33,Semantic Scholar,,, prompt position really matters in fewshot and zeroshot nlu tasks,"['Junyu Mao', 'S. Middleton', 'M. Niranjan']",https://arxiv.org/pdf/2305.14493,,,"Prompt-based models have made remarkable advancements in the fields of zero-shot and few-shot learning, attracting a lot of attention from researchers. Developing an effective prompt template plays a critical role. However, prior studies have mainly focused on prompt vocabulary selection or embedding initialization with the reserved prompt position fixed. In this empirical study, we conduct the most comprehensive analysis to date of prompt position option for natural language understanding tasks. Our findings quantify the substantial impact prompt position has on model performance. We observe that the prompt position used in prior studies is often sub-optimal for both zero-shot and few-shot settings. These findings suggest prompt position optimisation as an interesting research direction alongside the existing focus on prompt engineering.",56a9c96a29f4047be8465244576d731f0df2d9df,Semantic Scholar,,, situated natural language explanations,"['Zining Zhu', 'Hao Jiang', 'Jingfeng Yang', 'Sreyashi Nag', 'Chao Zhang', 'Jie Huang', 'Yifan Gao', 'Frank Rudzicz', 'Bing Yin']",https://arxiv.org/pdf/2308.14115,2023-08-27,,"Natural language is among the most accessible tools for explaining decisions to humans, and large pretrained language models (PLMs) have demonstrated impressive abilities to generate coherent natural language explanations (NLE). The existing NLE research perspectives do not take the audience into account. An NLE can have high textual quality, but it might not accommodate audiences' needs and preference. To address this limitation, we propose an alternative perspective, situated NLE, including a situated generation framework and a situated evaluation framework. On the generation side, we propose simple prompt engineering methods that adapt the NLEs to situations. In human studies, the annotators preferred the situated NLEs. On the evaluation side, we set up automated evaluation scores in lexical, semantic, and pragmatic categories. The scores can be used to select the most suitable prompts to generate NLEs. Situated NLE provides a perspective to conduct further research on automatic NLE generations.",57404bd8c71e2b17fce63b49229b278b6a66bf13,Semantic Scholar,,, what's the magic word a control theory of llm prompting,"['Aman Bhargava', 'Cameron Witkowski', 'Manav Shah', 'Matt W. Thomson']",https://arxiv.org/pdf/2310.04444,2023-10-02,,"Prompt engineering is crucial for deploying LLMs but is poorly understood mathematically. We formalize LLM systems as a class of discrete stochastic dynamical systems to explore prompt engineering through the lens of control theory. We investigate the reachable set of output token sequences $R_y(\mathbf x_0)$ for which there exists a control input sequence $\mathbf u$ for each $\mathbf y \in R_y(\mathbf x_0)$ that steers the LLM to output $\mathbf y$ from initial state sequence $\mathbf x_0$. We offer analytic analysis on the limitations on the controllability of self-attention in terms of reachable set, where we prove an upper bound on the reachable set of outputs $R_y(\mathbf x_0)$ as a function of the singular values of the parameter matrices. 
We present complementary empirical analysis on the controllability of a panel of LLMs, including Falcon-7b, Llama-7b, and Falcon-40b. Our results demonstrate a lower bound on the reachable set of outputs $R_y(\mathbf x_0)$ w.r.t. initial state sequences $\mathbf x_0$ sampled from the Wikitext dataset. We find that the correct next Wikitext token following sequence $\mathbf x_0$ is reachable over 97% of the time with prompts of $k\leq 10$ tokens. We also establish that the top 75 most likely next tokens, as estimated by the LLM itself, are reachable at least 85% of the time with prompts of $k\leq 10$ tokens. Intriguingly, short prompt sequences can dramatically alter the likelihood of specific outputs, even making the least likely tokens become the most likely ones. This control-centric analysis of LLMs demonstrates the significant and poorly understood role of input sequences in steering output probabilities, offering a foundational perspective for enhancing language model system capabilities.",57a4f8f69908d3474565d3cd6f58b1ca651ff673,Semantic Scholar,,, red teaming language models with language models,"['Ethan Perez', 'Saffron Huang', 'Francis Song', 'Trevor Cai', 'Roman Ring', 'John Aslanides', 'A. Glaese', 'Nathan McAleese', 'G. Irving']",https://aclanthology.org/2022.emnlp-main.225.pdf,2022-02-07,,"Language Models (LMs) often cannot be deployed because of their potential to harm users in hard-to-predict ways. Prior work identifies harmful behaviors before deployment by using human annotators to hand-write test cases. However, human annotation is expensive, limiting the number and diversity of test cases. In this work, we automatically find cases where a target LM behaves in a harmful way, by generating test cases (“red teaming”) using another LM. We evaluate the target LM’s replies to generated test questions using a classifier trained to detect offensive content, uncovering tens of thousands of offensive replies in a 280B parameter LM chatbot. We explore several methods, from zero-shot generation to reinforcement learning, for generating test cases with varying levels of diversity and difficulty. Furthermore, we use prompt engineering to control LM-generated test cases to uncover a variety of other harms, automatically finding groups of people that the chatbot discusses in offensive ways, personal and hospital phone numbers generated as the chatbot’s own contact info, leakage of private training data in generated text, and harms that occur over the course of a conversation. Overall, LM-based red teaming is one promising tool (among many needed) for finding and fixing diverse, undesirable LM behaviors before impacting users.",5d49c7401c5f2337c4cc88d243ae39ed659afe64,Semantic Scholar,,, towards interpretable mental health analysis with large language models,"['Kailai Yang', 'Shaoxiong Ji', 'Tianlin Zhang', 'Qianqian Xie', 'Zi-Zhou Kuang', 'Sophia Ananiadou']",https://aclanthology.org/2023.emnlp-main.370.pdf,2023-04-07,,"The latest large language models (LLMs) such as ChatGPT, exhibit strong capabilities in automated mental health analysis. However, existing relevant studies bear several limitations, including inadequate evaluations, lack of prompting strategies, and ignorance of exploring LLMs for explainability. To bridge these gaps, we comprehensively evaluate the mental health analysis and emotional reasoning ability of LLMs on 11 datasets across 5 tasks. We explore the effects of different prompting strategies with unsupervised and distantly supervised emotional information. 
Based on these prompts, we explore LLMs for interpretable mental health analysis by instructing them to generate explanations for each of their decisions. We convey strict human evaluations to assess the quality of the generated explanations, leading to a novel dataset with 163 human-assessed explanations. We benchmark existing automatic evaluation metrics on this dataset to guide future related works. According to the results, ChatGPT shows strong in-context learning ability but still has a significant gap with advanced task-specific methods. Careful prompt engineering with emotional cues and expert-written few-shot examples can also effectively improve performance on mental health analysis. In addition, ChatGPT generates explanations that approach human performance, showing its great potential in explainable mental health analysis.",5d879530c443dd06d3686f31d32cfe34c7ade9bc,Semantic Scholar,,, trash to treasure using texttoimage models to inform the design of physical artefacts,"['Amy Smith', 'Hope Schroeder', 'Ziv Epstein', 'Michael Cook', 'S. Colton', 'A. Lippman']",http://arxiv.org/pdf/2302.00561,2023-02-01,,"Text-to-image generative models have recently exploded in popularity and accessibility. Yet so far, use of these models in creative tasks that bridge the 2D digital world and the creation of physical artefacts has been understudied. We conduct a pilot study to investigate if and how text-to-image models can be used to assist in upstream tasks within the creative process, such as ideation and visualization, prior to a sculpture-making activity. Thirty participants selected sculpture-making materials and generated three images using the Stable Diffusion text-to-image generator, each with text prompts of their choice, with the aim of informing and then creating a physical sculpture. The majority of participants (23/30) reported that the generated images informed their sculptures, and 28/30 reported interest in using text-to-image models to help them in a creative task in the future. We identify several prompt engineering strategies and find that a participant's prompting strategy relates to their stage in the creative process. We discuss how our findings can inform support for users at different stages of the design process and for using text-to-image models for physical artefact design.",5de60d53bce194b34dae1e531876af9acffba1a3,Semantic Scholar,,, knowledge graph completion models are fewshot learners an empirical study of relation labeling in ecommerce with llms,"['Jiaoayan Chen', 'Luyi Ma', 'Xiaohan Li', 'Nikhil Thakurdesai', 'Jianpeng Xu', 'Jason H. D. Cho', 'Kaushiki Nag', 'Evren Korpeoglu', 'Sushant Kumar', 'Kannan Achan']",http://arxiv.org/pdf/2305.09858,2023-05-17,,"Knowledge Graphs (KGs) play a crucial role in enhancing e-commerce system performance by providing structured information about entities and their relationships, such as complementary or substitutable relations between products or product types, which can be utilized in recommender systems. However, relation labeling in KGs remains a challenging task due to the dynamic nature of e-commerce domains and the associated cost of human labor. Recently, breakthroughs in Large Language Models (LLMs) have shown surprising results in numerous natural language processing tasks. 
In this paper, we conduct an empirical study of LLMs for relation labeling in e-commerce KGs, investigating their powerful learning capabilities in natural language and effectiveness in predicting relations between product types with limited labeled data. We evaluate various LLMs, including PaLM and GPT-3.5, on benchmark datasets, demonstrating their ability to achieve competitive performance compared to humans on relation labeling tasks using just 1 to 5 labeled examples per relation. Additionally, we experiment with different prompt engineering techniques to examine their impact on model performance. Our results show that LLMs significantly outperform existing KG completion models in relation labeling for e-commerce KGs and exhibit performance strong enough to replace human labeling.",5e8dd82419f78025093acbec3ba2e345fff85d11,Semantic Scholar,,, responsible task automation empowering large language models as responsible task automators,"['Zhizheng Zhang', 'Xiaoyi Zhang', 'Wenxuan Xie', 'Yan Lu']",http://arxiv.org/pdf/2306.01242,2023-06-02,,"The recent success of Large Language Models (LLMs) signifies an impressive stride towards artificial general intelligence. They have shown a promising prospect in automatically completing tasks upon user instructions, functioning as brain-like coordinators. The associated risks will be revealed as we delegate an increasing number of tasks to machines for automated completion. A big question emerges: how can we make machines behave responsibly when helping humans automate tasks as personal copilots? In this paper, we explore this question in depth from the perspectives of feasibility, completeness and security. Specifically, we present Responsible Task Automation (ResponsibleTA) as a fundamental framework to facilitate responsible collaboration between LLM-based coordinators and executors for task automation with three empowered capabilities: 1) predicting the feasibility of the commands for executors; 2) verifying the completeness of executors; 3) enhancing the security (e.g., the protection of users' privacy). We further propose and compare two paradigms for implementing the first two capabilities. One is to leverage the generic knowledge of LLMs themselves via prompt engineering while the other is to adopt domain-specific learnable models. Moreover, we introduce a local memory mechanism for achieving the third capability. We evaluate our proposed ResponsibleTA on UI task automation and hope it could bring more attention to making LLMs more responsible in diverse scenarios.",615962d8969c8e0ffe43319689dce6c50cbf1f29,Semantic Scholar,,, peace prompt engineering automation for clipseg enhancement in aerial robotics,"['Haechan Mark Bong', 'Rongge Zhang', 'Ricardo de Azambuja', 'Giovanni Beltrame']",https://arxiv.org/pdf/2310.00085,2023-09-29,,"From industrial to space robotics, safe landing is an essential component for flight operations. With the growing interest in artificial intelligence, we direct our attention to learning-based safe landing approaches. This paper extends our previous work, DOVESEI, which focused on a reactive UAV system by harnessing the capabilities of open vocabulary image segmentation. Prompt-based safe landing zone segmentation using an open vocabulary based model is no longer just an idea, but has been proven feasible by the work of DOVESEI. 
However, a heuristic selection of words for the prompt is not a reliable solution, since it cannot take the changing environment into consideration and detrimental consequences can occur if the observed environment is not well represented by the given prompt. Therefore, we introduce PEACE (Prompt Engineering Automation for CLIPSeg Enhancement), powering DOVESEI to automate the prompt generation and engineering to adapt to data distribution shifts. Our system is capable of performing safe landing operations with collision avoidance at altitudes as low as 20 meters using only monocular cameras and image segmentation. We take advantage of DOVESEI's dynamic focus to circumvent abrupt fluctuations in the terrain segmentation between frames in a video stream. PEACE shows promising improvements in prompt generation and engineering for aerial images compared to the standard prompt used for CLIP and CLIPSeg. Combining DOVESEI and PEACE, our system was able to improve successful safe landing zone selections by 58.62% compared to using only DOVESEI. All the source code is open source and available online.",615ef4518f9a41a10881b66ce10f0eb490e2d75c,Semantic Scholar,,, datadriven approach for formalitysensitive machine translation languagespecific handling and synthetic data generation,"['Seugnjun Lee', 'Hyeonseok Moon', 'Chanjun Park', 'Heu-Jeoung Lim']",http://arxiv.org/pdf/2306.14514,2023-06-26,,"In this paper, we introduce a data-driven approach for Formality-Sensitive Machine Translation (FSMT) that caters to the unique linguistic properties of four target languages. Our methodology centers on two core strategies: 1) language-specific data handling, and 2) synthetic data generation using large-scale language models and empirical prompt engineering. This approach demonstrates a considerable improvement over the baseline, highlighting the effectiveness of data-centric techniques. Our prompt engineering strategy further improves performance by producing superior synthetic translation examples.",632dc69c2e504d693533fc434b8122a2a8a42844,Semantic Scholar,,, forgetful large language models lessons learned from using llms in robot programming,"['Juo-Tung Chen', 'Chien-Ming Huang']",https://arxiv.org/pdf/2310.06646,2023-10-10,,"Large language models offer new ways of empowering people to program robot applications, namely code generation via prompting. However, the code generated by LLMs is susceptible to errors. This work reports a preliminary exploration that empirically characterizes common errors produced by LLMs in robot programming. We categorize these errors into two phases: interpretation and execution. In this work, we focus on errors in execution and observe that they are caused by LLMs being “forgetful” of key information provided in user prompts. Based on this observation, we propose prompt engineering tactics designed to reduce errors in execution. We then demonstrate the effectiveness of these tactics with three language models: ChatGPT, Bard, and LLaMA-2. Finally, we discuss lessons learned from using LLMs in robot programming and call for the benchmarking of LLM-powered end-user development of robot applications.",6474370fe46e38896288305c35d3058a403b1db2,Semantic Scholar,,, benchmarking causal study to interpret large language models for source code,"['Daniel Rodríguez-Cárdenas', 'David N. Palacio', 'Dipin Khati', 'Henry Burke', 'D. 
Poshyvanyk']",https://arxiv.org/pdf/2308.12415,2023-08-23,,"One of the most common solutions adopted by software researchers to address code generation is by training Large Language Models (LLMs) on massive amounts of source code. LLMs are rooted in the concept of emergent capabilities in which machines statistically learn complex patterns from code data. Although a number of studies have shown that LLMs have been effectively evaluated on popular accuracy metrics (e.g., BLEU, CodeBleu), previous research has largely overlooked the role of Causal Inference as a fundamental component of the interpretability of LLMs’ performance. Existing benchmarks and datasets are meant to highlight the difference between the expected and the generated outcome, but do not take into account confounding variables (e.g., lines of code, number of tokens, prompt size) that equally influence the accuracy metrics. The fact remains that, when dealing with generative software tasks by LLMs, no benchmark is available to tell researchers how to quantify neither the causal effect of SE-based treatments nor the correlation of confounders to the model’s performance. In an effort to bring statistical rigor to the evaluation of LLMs, this paper introduces a benchmarking strategy named Galeras comprised of curated testbeds for three SE tasks (i.e., code completion, code summarization, and commit generation) to help aid the interpretation of LLMs’ performance.We illustrate the insights of our benchmarking strategy by conducting a case study on the performance of ChatGPT under distinct prompt engineering methods. The results of the case study demonstrate the positive causal influence of prompt semantics on ChatGPT’s generative performance by an average treatment effect of ≈ 3%. Moreover, it was found that confounders such as prompt size are highly correlated with accuracy metrics (≈ 0.412). The end result of our case study is to showcase causal inference evaluations, in practice, to reduce confounding bias. By reducing the bias, we offer an interpretable solution for the accuracy metric under analysis.",6634e56c1046f3d16eaadecac45d5576d93eee83,Semantic Scholar,,, transfer learning for power outage detection task with limited training data,['Olukunle O. Owolabi'],http://arxiv.org/pdf/2305.17817,2023-05-28,,"Early detection of power outages is crucial for maintaining a reliable power distribution system. This research investigates the use of transfer learning and language models in detecting outages with limited labeled data. By leveraging pretraining and transfer learning, models can generalize to unseen classes. Using a curated balanced dataset of social media tweets related to power outages, we conducted experiments using zero-shot and few-shot learning. Our hypothesis is that Language Models pretrained with limited data could achieve high performance in outage detection tasks over baseline models. Results show that while classical models outperform zero-shot Language Models, few-shot fine-tuning significantly improves their performance. For example, with 10% fine-tuning, BERT achieves 81.3% accuracy (+15.3%), and GPT achieves 74.5% accuracy (+8.5%). This has practical implications for analyzing and localizing outages in scenarios with limited data availability. Our evaluation provides insights into the potential of few-shot fine-tuning with Language Models for power outage detection, highlighting their strengths and limitations. 
This research contributes to the knowledge base of leveraging advanced natural language processing techniques for managing critical infrastructure.",05fab50acb26203a944a955131a2388c9731a8f7,Semantic Scholar,,, distillation of encoderdecoder transformers for sequence labelling,"['M. Farina', 'D. Pappadopulo', 'Anant Gupta', 'Leslie Huang', 'Ozan Irsoy', 'T. Solorio']",http://arxiv.org/pdf/2302.05454,2023-02-10,,"Driven by encouraging results on a wide range of tasks, the field of NLP is experiencing an accelerated race to develop bigger language models. This race for bigger models has also underscored the need to continue the pursuit of practical distillation approaches that can leverage the knowledge acquired by these big models in a compute-efficient manner. Having this goal in mind, we build on recent work to propose a hallucination-free framework for sequence tagging that is especially suited for distillation. We show empirical results of new state-of-the-art performance across multiple sequence labelling datasets and validate the usefulness of this framework for distilling a large model in a few-shot learning scenario.",0704a96e1c57c12031f1c3ca492a91dbed1f61ce,Semantic Scholar,,, technical report competition solution for prompt tuning using pretrained language model,"['Jiang-Long Song', 'Wuhe Zou', 'Feng Li', 'Xiaolei Qin', 'Weidong Zhang']",http://arxiv.org/pdf/2212.06369,2022-12-13,,"Prompt tuning recently becomes a hot-spot in the applications of large pretrained language models on specific downstream tasks. Regarding the Language Model as a Service (LMaaS), black-box tuning using derivative-free optimization (DFO) provides a novel approach to expand the practical scenarios of pretrained models and enrich the researches of few-shot learning. In this report, we present our solution in this competition that is based on the LMaaS scenario. Our solution consists of several modifications to BBTv2, including multiple label words, selection of P0, rolling update strategy, multi-task loss from MLP classifier, and finally using the ensemble method to further improve generalization ability. We also shared some strategies that we tried but didn't use in the final submission for further discussion. In the end we raised a question about the SNLI dataset and the impact on the results, as well as our concerns about the competition.",075e16a0774b1a9d44a7d512c50b7f997e16befe,Semantic Scholar,,, exploiting the potential of seq2seq models as robust fewshot learners,"['Jihyeon Janel Lee', 'Dain Kim', 'Doohae Jung', 'Boseop Kim', 'Kyoung-Woon On']",https://arxiv.org/pdf/2307.14856,2023-07-27,,"In-context learning, which offers substantial advantages over fine-tuning, is predominantly observed in decoder-only models, while encoder-decoder (i.e., seq2seq) models excel in methods that rely on weight updates. Recently, a few studies have demonstrated the feasibility of few-shot learning with seq2seq models; however, this has been limited to tasks that align well with the seq2seq architecture, such as summarization and translation. Inspired by these initial studies, we provide a first-ever extensive experiment comparing the in-context few-shot learning capabilities of decoder-only and encoder-decoder models on a broad range of tasks. Furthermore, we propose two methods to more effectively elicit in-context learning ability in seq2seq models: objective-aligned prompting and a fusion-based approach. 
Remarkably, our approach outperforms a decoder-only model that is six times larger and exhibits significant performance improvements compared to conventional seq2seq models across a variety of settings. We posit that, with the right configuration and prompt design, seq2seq models can be highly effective few-shot learners for a wide spectrum of applications.",07bc02bd16f6fe78a7ea3bb8d966fcc6e3893195,Semantic Scholar,,, cohortgpt an enhanced gpt for participant recruitment in clinical study,"['Zihan Guan', 'Zihao Wu', 'Zheng Liu', 'Dufan Wu', 'Hui Ren', 'Quanzheng Li', 'Xiang Li', 'Ninghao Liu']",https://arxiv.org/pdf/2307.11346,2023-07-21,,"Participant recruitment based on unstructured medical texts such as clinical notes and radiology reports has been a challenging yet important task for the cohort establishment in clinical research. Recently, Large Language Models (LLMs) such as ChatGPT have achieved tremendous success in various downstream tasks thanks to their promising performance in language understanding, inference, and generation. It is then natural to test their feasibility in solving the cohort recruitment task, which involves the classification of a given paragraph of medical text into disease label(s). However, when applied to knowledge-intensive problem settings such as medical text classification, where the LLMs are expected to understand the decision made by human experts and accurately identify the implied disease labels, the LLMs show a mediocre performance. A possible explanation is that, by only using the medical text, the LLMs neglect to use the rich context of additional information that languages afford. To this end, we propose to use a knowledge graph as auxiliary information to guide the LLMs in making predictions. Moreover, to further boost the LLMs adapt to the problem setting, we apply a chain-of-thought (CoT) sample selection strategy enhanced by reinforcement learning, which selects a set of CoT samples given each individual medical report. Experimental results and various ablation studies show that our few-shot learning method achieves satisfactory performance compared with fine-tuning strategies and gains superb advantages when the available data is limited. The code and sample dataset of the proposed CohortGPT model is available at: https://anonymous.4open.science/r/CohortGPT-4872/",089f6328085066263fedc083952624ca121ebbf3,Semantic Scholar,,, zicl zeroshot incontext learning with pseudodemonstrations,"['Xinxi Lyu', 'Sewon Min', 'Iz Beltagy', 'Luke Zettlemoyer', 'Hannaneh Hajishirzi']",http://arxiv.org/pdf/2212.09865,2022-12-19,,"Although large language models can be prompted for both zero- and few-shot learning, performance drops significantly when no demonstrations are available. In this paper, we introduce Z-ICL, a new zero-shot method that closes the gap by constructing pseudo-demonstrations for a given test input using a raw text corpus. Concretely, pseudo-demonstrations are constructed by (1) finding the nearest neighbors to the test input from the corpus and pairing them with random task labels, and (2) applying a set of techniques to reduce the amount of direct copying the model does from the resulting demonstrations. Evaluation on nine classification datasets shows that Z-ICL outperforms previous zero-shot methods by a significant margin, and is on par with in-context learning with labeled training data in the few-shot setting. 
Overall, Z-ICL provides a significantly higher estimate of the zero-shot performance levels of a model, and supports future efforts to develop better pseudo-demonstrations that further improve zero-shot results.",0942bd8fad71282994ff4e9a779c09745da68edc,Semantic Scholar,,, zeroshot and fewshot learning for lung cancer multilabel classification using vision transformer,"['F. Guo', 'Yingfang Fan']",https://arxiv.org/pdf/2205.15290,2022-05-30,,"Lung cancer is the leading cause of cancer-related death worldwide. Lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC) are the most common histologic subtypes of non-small-cell lung cancer (NSCLC). Histology is an essential tool for lung cancer diagnosis. Pathologists make classifications according to the dominant subtypes. Although morphology remains the standard for diagnosis, a significant tool needs to be developed to elucidate the diagnosis. In our study, we utilize the pre-trained Vision Transformer (ViT) model to classify multi-label lung cancer on histologic slices (from the LC25000 dataset), in both Zero-Shot and Few-Shot settings. Then we compare the performance of Zero-Shot and Few-Shot ViT on accuracy, precision, recall, sensitivity and specificity. Our study shows that the pre-trained ViT model has a good performance in the Zero-Shot setting, a competitive accuracy (99.87%) in the Few-Shot setting (epoch = 1) and an optimal result (100.00% on both validation set and test set) in the Few-Shot setting (epoch = 5).",0953ada119f384f328b6102e6b7963b3bde7cc9e,Semantic Scholar,,, unified vision and language prompt learning,"['Yuhang Zang', 'Wei Li', 'Kaiyang Zhou', 'Chen Huang', 'Chen Change Loy']",http://arxiv.org/pdf/2210.07225,2022-10-13,,"Prompt tuning, a parameter- and data-efficient transfer learning paradigm that tunes only a small number of parameters in a model's input space, has become a trend in the vision community since the emergence of large vision-language models like CLIP. We present a systematic study on two representative prompt tuning methods, namely text prompt tuning and visual prompt tuning. A major finding is that none of the unimodal prompt tuning methods performs consistently well: text prompt tuning fails on data with high intra-class visual variances while visual prompt tuning cannot handle low inter-class variances. To combine the best from both worlds, we propose a simple approach called Unified Prompt Tuning (UPT), which essentially learns a tiny neural network to jointly optimize prompts across different modalities. Extensive experiments on over 11 vision datasets show that UPT achieves a better trade-off than the unimodal counterparts on few-shot learning benchmarks, as well as on domain generalization benchmarks. Code and models will be released to facilitate future research.",09b7338021fff3200c2098b19824aecc83a66cb5,Semantic Scholar,,, plugandplay multilingual fewshot spoken words recognition,"['Aaqib Saeed', 'Vasileios Tsouvalas']",http://arxiv.org/pdf/2305.03058,2023-05-03,,"As technology advances and digital devices become prevalent, seamless human-machine communication is increasingly gaining significance. The growing adoption of mobile, wearable, and other Internet of Things (IoT) devices has changed how we interact with these smart devices, making accurate spoken words recognition a crucial component for effective interaction. However, building a robust spoken words detection system that can handle novel keywords remains challenging, especially for low-resource languages with limited training data. 
Here, we propose PLiX, a multilingual and plug-and-play keyword spotting system that leverages few-shot learning to harness massive real-world data and enable the recognition of unseen spoken words at test-time. Our few-shot deep models are learned with millions of one-second audio clips across 20 languages, achieving state-of-the-art performance while being highly efficient. Extensive evaluations show that PLiX can generalize to novel spoken words given as few as just one support example and performs well on unseen languages out of the box. We release models and inference code to serve as a foundation for future research and voice-enabled user interface development for emerging devices.",0b413633f14ec7f96948067abf1d4ca930fa38a1,Semantic Scholar,,, zeroshot approach to overcome perturbation sensitivity of prompts,"['Mohna Chakraborty', 'Adithya Kulkarni', 'Qi Li']",http://arxiv.org/pdf/2305.15689,2023-05-25,,"Recent studies have demonstrated that natural-language prompts can help to leverage the knowledge learned by pre-trained language models for the binary sentence-level sentiment classification task. Specifically, these methods utilize few-shot learning settings to fine-tune the sentiment classification model using manual or automatically generated prompts. However, the performance of these methods is sensitive to the perturbations of the utilized prompts. Furthermore, these methods depend on a few labeled instances for automatic prompt generation and prompt ranking. This study aims to find high-quality prompts for the given task in a zero-shot setting. Given a base prompt, our proposed approach automatically generates multiple prompts similar to the base prompt employing positional, reasoning, and paraphrasing techniques and then ranks the prompts using a novel metric. We empirically demonstrate that the top-ranked prompts are high-quality and significantly outperform the base prompt and the prompts generated using few-shot learning for the binary sentence-level sentiment classification task.",0b71af0bf02ab58b8d8e342c1c803322cfede603,Semantic Scholar,,, templatefree prompt tuning for fewshot ner,"['Ruotian Ma', 'Xin Zhou', 'Tao Gui', 'Y. Tan', 'Qi Zhang', 'Xuanjing Huang']",https://aclanthology.org/2022.naacl-main.420.pdf,2021-09-28,,"Prompt-based methods have been successfully applied in sentence-level few-shot learning tasks, mostly owing to the sophisticated design of templates and label words. However, when applied to token-level labeling tasks such as NER, it would be time-consuming to enumerate the template queries over all potential entity spans. In this work, we propose a more elegant method to reformulate NER tasks as LM problems without any templates. Specifically, we discard the template construction process while maintaining the word prediction paradigm of pre-training models to predict a class-related pivot word (or label word) at the entity position. Meanwhile, we also explore principled ways to automatically search for appropriate label words that the pre-trained models can easily adapt to. While avoiding the complicated template-based process, the proposed LM objective also reduces the gap between different objectives used in pre-training and fine-tuning, thus it can better benefit the few-shot performance. Experimental results demonstrate the effectiveness of the proposed method over bert-tagger and template-based method under few-shot settings. 
Moreover, the decoding speed of the proposed method is up to 1930.12 times faster than the template-based method.",1dd344ce28f1e5a078f9d8396b5078388e555d99,Semantic Scholar,,, a study on promptbased fewshot learning methods for belief state tracking in taskoriented dialog systems,"['Debjoy Saha', 'Bishal Santra', 'Pawan Goyal']",http://arxiv.org/pdf/2204.08167,2022-04-18,,"We tackle the Dialogue Belief State Tracking(DST) problem of task-oriented conversational systems. Recent approaches to this problem leveraging Transformer-based models have yielded great results. However, training these models is expensive, both in terms of computational resources and time. Additionally, collecting high quality annotated dialogue datasets remains a challenge for researchers because of the extensive annotation required for training these models. Driven by the recent success of pre-trained language models and prompt-based learning, we explore prompt-based few-shot learning for Dialogue Belief State Tracking. We formulate the DST problem as a 2-stage prompt-based language modelling task and train language models for both tasks and present a comprehensive empirical analysis of their separate and joint performance. We demonstrate the potential of prompt-based methods in few-shot learning for DST and provide directions for future improvement.",21e46f11898748778a31b5b2fe2f60128eb66ba1,Semantic Scholar,,, stabilized incontext learning with pretrained language models for few shot dialogue state tracking,"['Derek Chen', 'Kun Qian', 'Zhou Yu']",http://arxiv.org/pdf/2302.05932,2023-02-12,,"Prompt-based methods with large pre-trained language models (PLMs) have shown impressive unaided performance across many NLP tasks. These models improve even further with the addition of a few labeled in-context exemplars to guide output generation. However, for more complex tasks such as dialogue state tracking (DST), designing prompts that reliably convey the desired intent is nontrivial, leading to unstable results. Furthermore, building in-context exemplars for dialogue tasks is difficult because conversational contexts are long while model input lengths are relatively short.To overcome these issues we first adapt a meta-learning scheme to the dialogue domain which stabilizes the ability of the model to perform well under various prompts. We additionally design a novel training method to improve upon vanilla retrieval mechanisms to find ideal in-context examples. Finally, we introduce a saliency model to limit dialogue text length, allowing us to include more exemplars per query. In effect, we are able to achieve highly competitive results for few-shot DST on MultiWOZ.",59ef1b67c5f238d5d6d175d84fb6b239b4221a97,Semantic Scholar,,, steps towards promptbased creation of virtual worlds,"['Jasmine Roberts', 'Andrzej Banburski-Fahey', 'J. Lanier']",https://arxiv.org/pdf/2211.05875,2022-11-10,,"Large language models trained for code generation can be applied to speaking virtual worlds into existence (creating virtual worlds). In this work we show that prompt-based methods can both accelerate in-VR level editing, as well as can become part of gameplay rather than just part of game development. As an example, we present Codex VR Pong which shows non-deterministic game mechanics using generative processes to not only create static content but also non-trivial interactions between 3D objects. 
This demonstration naturally leads to an integral discussion on how one would evaluate and benchmark experiences created by generative models - as there are no qualitative or quantitative metrics that apply in these scenarios. We conclude by discussing impending challenges of AI-assisted co-creation in VR.",632ab7663e6d64578ceda1d1df9ec525b503bacb,Semantic Scholar,,, purr efficiently editing language model hallucinations by denoising language model corruptions,"['Anthony Chen', 'Panupong Pasupat', 'Sameer Singh', 'Hongrae Lee', 'Kelvin Guu']",http://arxiv.org/pdf/2305.14908,2023-05-24,,"The remarkable capabilities of large language models have been accompanied by a persistent drawback: the generation of false and unsubstantiated claims commonly known as""hallucinations"". To combat this issue, recent research has introduced approaches that involve editing and attributing the outputs of language models, particularly through prompt-based editing. However, the inference cost and speed of using large language models for editing currently bottleneck prompt-based methods. These bottlenecks motivate the training of compact editors, which is challenging due to the scarcity of training data for this purpose. To overcome these challenges, we exploit the power of large language models to introduce corruptions (i.e., noise) into text and subsequently fine-tune compact editors to denoise the corruptions by incorporating relevant evidence. Our methodology is entirely unsupervised and provides us with faux hallucinations for training in any domain. Our Petite Unsupervised Research and Revision model, PURR, not only improves attribution over existing editing methods based on fine-tuning and prompting, but also achieves faster execution times by orders of magnitude.",7db7653c581d7823cb9c328f2d742ec70d7a0ce4,Semantic Scholar,,, zeroshot domain adaptation for neural machine translation with retrieved phraselevel prompts,"['Zewei Sun', 'Qingnan Jiang', 'Shujian Huang', 'Jun Cao', 'Shanbo Cheng', 'Mingxuan Wang']",http://arxiv.org/pdf/2209.11409,2022-09-23,,"Domain adaptation is an important challenge for neural machine translation. However, the traditional fine-tuning solution requires multiple extra training and yields a high cost. In this paper, we propose a non-tuning paradigm, resolving domain adaptation with a prompt-based method. Specifically, we construct a bilingual phrase-level database and retrieve relevant pairs from it as a prompt for the input sentences. By utilizing Retrieved Phrase-level Prompts (RePP), we effectively boost the translation quality. Experiments show that our method improves domain-specific machine translation for 6.2 BLEU scores and improves translation constraints for 11.5% accuracy without additional training.",80c0416048614be75362c2c332d22dd1d2b22f65,Semantic Scholar,,, low resource pipeline for spoken language understanding via weak supervision,"['Ayush Kumar', 'Rishabh Tripathi', 'Jithendra Vepa']",https://arxiv.org/pdf/2206.10559,2022-06-21,,"In Weak Supervised Learning (WSL), a model is trained over noisy labels obtained from semantic rules and task-specific pre-trained models. Rules offer limited generalization over tasks and require significant manual efforts while pre-trained models are available only for limited tasks. In this work, we propose to utilize prompt-based methods as weak sources to obtain the noisy labels on unannotated data. 
We show that task-agnostic prompts are generalizable and can be used to obtain noisy labels for different Spoken Language Understanding (SLU) tasks such as sentiment classification, disfluency detection and emotion classification. These prompts could additionally be updated to add task-specific contexts, thus providing flexibility to design task-specific prompts. We demonstrate that prompt-based methods generate reliable labels for the above SLU tasks and thus can be used as a universal weak source to train a weak-supervised model (WSM) in absence of labeled data. Our proposed WSL pipeline trained over prompt-based weak source outperforms other competitive low-resource benchmarks on zero and few-shot learning by more than 4% on Macro-F1 on all of the three benchmark SLU datasets. The proposed method also outperforms a conventional rule based WSL pipeline by more than 5% on Macro-F1.",9ecf603dbebbfbdd9858d21903c77074d12518b4,Semantic Scholar,,, instructionner a multitask instructionbased generative framework for fewshot ner,"['Liwen Wang', 'Rumei Li', 'Yang Yan', 'Yuanmeng Yan', 'Sirui Wang', 'Wei Yu Wu', 'Weiran Xu']",http://arxiv.org/pdf/2203.03903,2022-03-08,,"Recently, prompt-based methods have achieved significant performance in few-shot learning scenarios by bridging the gap between language model pre-training and fine-tuning for downstream tasks. However, existing prompt templates are mostly designed for sentence-level tasks and are inappropriate for sequence labeling objectives. To address the above issue, we propose a multi-task instruction-based generative framework, named InstructionNER, for low-resource named entity recognition. Specifically, we reformulate the NER task as a generation problem, which enriches source sentences with task-specific instructions and answer options, then inferences the entities and types in natural language. We further propose two auxiliary tasks, including entity extraction and entity typing, which enable the model to capture more boundary information of entities and deepen the understanding of entity type semantics, respectively. Experimental results show that our method consistently outperforms other baselines on five datasets in few-shot settings.",a29a0e679e626e8961dc217081eae2a6c63a15ad,Semantic Scholar,,, stt soft template tuning for fewshot adaptation,"['Ping Yu', 'Wei Wang', 'Chunyuan Li', 'Ruiyi Zhang', 'Zhanpeng Jin', 'Changyou Chen']",https://arxiv.org/pdf/2207.08408,2022-07-18,,"Prompt tuning has been an extremely effective tool to adapt a pre-trained model to downstream tasks. However, standard prompt-based methods mainly consider the case of sufficient data of downstream tasks. It is still unclear whether the advantage can be transferred to the few-shot regime, where only limited data are available for each downstream task. Although some works have demonstrated the potential of prompt-tuning under the few-shot setting, the main stream methods via searching discrete prompts or tuning soft prompts with limited data are still very challenging. Through extensive empirical studies, we find that there is still a gap between prompt tuning and fully fine-tuning for few-shot learning. To bridge the gap, we propose a new prompt-tuning framework, called Soft Template Tuning (STT) 1. STT combines manual and auto prompts, and treats down-stream classification tasks as a masked language modeling task. 
Comprehensive evaluation on different settings suggests STT can close the gap between fine-tuning and prompt-based methods without introducing additional parameters. Significantly, it can even outperform the time- and resource-consuming fine-tuning method on sentiment classification tasks.",a45bdbbf9a197a21ef97291c60b77de47bc51db2,Semantic Scholar,,, enable language models to implicitly learn selfimprovement from data,"['Ziqi Wang', 'Le Hou', 'Tianjian Lu', 'Yuexin Wu', 'Yunxuan Li', 'Hongkun Yu', 'Heng Ji']",https://arxiv.org/pdf/2310.00898,2023-10-02,,"Large Language Models (LLMs) have demonstrated remarkable capabilities in open-ended text generation tasks. However, the inherent open-ended nature of these tasks implies that there is always room for improvement in the quality of model responses. To address this challenge, various approaches have been proposed to enhance the performance of LLMs. There has been a growing focus on enabling LLMs to self-improve their response quality, thereby reducing the reliance on extensive human annotation efforts for collecting diverse and high-quality training data. Recently, prompting-based methods have been widely explored among self-improvement methods owing to their effectiveness, efficiency, and convenience. However, those methods usually require explicitly and thoroughly written rubrics as inputs to LLMs. It is expensive and challenging to manually derive and provide all necessary rubrics with a real-world complex goal for improvement (e.g., being more helpful and less harmful). To this end, we propose an ImPlicit Self-ImprovemenT (PIT) framework that implicitly learns the improvement goal from human preference data. PIT only requires preference data that are used to train reward models without extra human efforts. Specifically, we reformulate the training objective of reinforcement learning from human feedback (RLHF) -- instead of maximizing response quality for a given input, we maximize the quality gap of the response conditioned on a reference response. In this way, PIT is implicitly trained with the improvement goal of better aligning with human preferences. Experiments on two real-world datasets and one synthetic dataset show that our method significantly outperforms prompting-based methods.",a81470aa3721f6cd8a61139f9c4c60923bee093f,Semantic Scholar,,, progressive visual prompt learning with contrastive feature reformation,"['C. Xu', 'Haocheng Shen', 'Fengyuan Shi', 'Boheng Chen', 'Yixuan Liao', 'Xiaoxin Chen', 'Limin Wang']",http://arxiv.org/pdf/2304.08386,2023-04-17,,"Prompt learning has been designed as an alternative to fine-tuning for adapting Vision-language (V-L) models to the downstream tasks. Previous works mainly focus on text prompt while visual prompt works are limited for V-L models. The existing visual prompt methods endure either mediocre performance or unstable training process, indicating the difficulty of visual prompt learning. In this paper, we propose a new Progressive Visual Prompt (ProVP) structure to strengthen the interactions among prompts of different layers. More importantly, our ProVP could effectively propagate the image embeddings to deep layers and behave partially similar to an instance adaptive prompt method. To alleviate generalization deterioration, we further propose a new contrastive feature re-formation, which prevents the serious deviation of the prompted visual feature from the fixed CLIP visual feature distribution. 
Combining both, our method (ProVP-Ref) is evaluated on 11 image benchmark datasets and achieves 7/11 state-of-the-art results on both few-shot and base-to-novel settings. To the best of our knowledge, we are the first to demonstrate the superior performance of visual prompts in V-L models over previous prompt-based methods in downstream tasks. Meanwhile, it implies that our ProVP-Ref shows the best capability to adapt and to generalize.",ab346a9d9a71bc59671e52cae96eabba16c24eeb,Semantic Scholar,,, fewshot event detection an empirical study and a unified view,"['Yubo Ma', 'Zehao Wang', 'Yixin Cao', 'Aixin Sun']",http://arxiv.org/pdf/2305.01901,2023-05-03,,"Few-shot event detection (ED) has been widely studied, while this brings noticeable discrepancies, e.g., various motivations, tasks, and experimental settings, that hinder the understanding of models for future progress. This paper presents a thorough empirical study, a unified view of ED models, and a better unified baseline. For fair evaluation, we compare 12 representative methods on three datasets, which are roughly grouped into prompt-based and prototype-based models for detailed analysis. Experiments consistently demonstrate that prompt-based methods, including ChatGPT, still significantly trail prototype-based methods in terms of overall performance. To investigate their superior performance, we break down their design elements along several dimensions and build a unified framework on prototype-based methods. Under such a unified view, each prototype-based method can be viewed as a combination of different modules from these design elements. We further combine all advantageous modules and propose a simple yet effective baseline, which outperforms existing methods by a large margin (e.g., 2.7% F1 gains under the low-resource setting).",ac7e270fcd365c84b29a710d58bf1243e850df4c,Semantic Scholar,,, qaner prompting question answering models for fewshot named entity recognition,"['Andy T. Liu', 'Wei Xiao', 'Henghui Zhu', 'Dejiao Zhang', 'Shang-Wen Li', 'Andrew O. Arnold']",http://arxiv.org/pdf/2203.01543,2022-03-03,,"Recently, prompt-based learning for pre-trained language models has succeeded in few-shot Named Entity Recognition (NER) by exploiting prompts as task guidance to increase label efficiency. However, previous prompt-based methods for few-shot NER have limitations such as a higher computational complexity, poor zero-shot ability, requiring manual prompt engineering, or lack of prompt robustness. In this work, we address these shortcomings by proposing a new prompt-based learning NER method with Question Answering (QA), called QaNER. Our approach includes 1) a refined strategy for converting NER problems into the QA formulation; 2) NER prompt generation for QA models; 3) prompt-based tuning with QA models on a few annotated NER examples; 4) zero-shot NER by prompting the QA model. Comparing the proposed approach with previous methods, QaNER is faster at inference, insensitive to the prompt quality, and robust to hyper-parameters, as well as demonstrating significantly better low-resource performance and zero-shot capability.",b159dffadb69940e14693e812bdaa32e3957717f,Semantic Scholar,,, causal interventionbased prompt debiasing for event argument extraction,"['Jiaju Lin', 'Jie Zhou', 'Qin Chen']",http://arxiv.org/pdf/2210.01561,2022-10-04,,"Prompt-based methods have become increasingly popular among information extraction tasks, especially in low-data scenarios. 
By formatting a fine-tuning task into a pre-training objective, prompt-based methods resolve the data-scarcity problem effectively. However, previous research has seldom investigated the discrepancy among different prompt formulating strategies. In this work, we compare two kinds of prompts, name-based prompts and ontology-based prompts, and reveal how ontology-based prompt methods exceed their counterpart in zero-shot event argument extraction (EAE). Furthermore, we analyse the potential risk in ontology-based prompts via a causal view and propose a debiasing method by causal intervention. Experiments on two benchmarks demonstrate that, modified by our debiasing method, the baseline model becomes both more effective and robust, with significant improvement in the resistance to adversarial attacks.",b1d5c08a6fb6a5ee5b6b6693e10a587733ca05ed,Semantic Scholar,,, interactivechainprompting ambiguity resolution for crosslingual conditional generation with interaction,"['Jonathan Pilault', 'Xavier García', 'Arthur Bravzinskas', 'Orhan Firat']",http://arxiv.org/pdf/2301.10309,2023-01-24,,"Crosslingual conditional generation (e.g., machine translation) has long enjoyed the benefits of scaling. Nonetheless, there are still issues that scale alone may not overcome. A source query in one language, for instance, may yield several translation options in another language without any extra context. Only one translation could be acceptable however, depending on the translator's preferences and goals. Choosing the incorrect option might significantly affect translation usefulness and quality. We propose a novel method, interactive-chain prompting -- a series of question, answering and generation intermediate steps between a Translator model and a User model -- that reduces translations into a list of subproblems addressing ambiguities and then resolving such subproblems before producing the final text to be translated. To check ambiguity resolution capabilities and evaluate translation quality, we create a dataset exhibiting different linguistic phenomena which lead to ambiguities at inference for four languages. To encourage further exploration in this direction, we release all datasets. We note that interactive-chain prompting, using eight interactions as exemplars, consistently surpasses prompt-based methods with direct access to background information to resolve ambiguities.",bad6fa523ecf782c837a2eecaaffa4e1f7477c24,Semantic Scholar,,, memobert pretraining model with promptbased learning for multimodal emotion recognition,"['Jinming Zhao', 'Ruichen Li', 'Qin Jin', 'Xinchao Wang', 'Haizhou Li']",https://arxiv.org/pdf/2111.00865,2021-10-27,,"Multimodal emotion recognition study is hindered by the lack of labelled corpora in terms of scale and diversity, due to the high annotation cost and label ambiguity. In this paper, we propose a multimodal pre-training model MEmoBERT for multimodal emotion recognition, which learns multimodal joint representations through self-supervised learning from self-collected large-scale unlabeled video data that come in sheer volume. Furthermore, unlike the conventional ""pre-train, finetune"" paradigm, we propose a prompt-based method that reformulates the downstream emotion classification task as a masked text prediction one, bringing the downstream task closer to the pre-training. 
Extensive experiments on two benchmark datasets, IEMOCAP and MSP-IMPROV, show that our proposed MEmoBERT significantly enhances emotion recognition performance.",c10ab4733b43f19547308c15ca231a668181a36c,Semantic Scholar,,, adaprompt adaptive model training for promptbased nlp,"['Yulong Chen', 'Yang Liu', 'Li Dong', 'Shuohang Wang', 'Chenguang Zhu', 'Michael Zeng', 'Yue Zhang']",https://aclanthology.org/2022.findings-emnlp.448.pdf,2022-02-10,,"Prompt-based learning, with its capability to tackle zero-shot and few-shot NLP tasks, has gained much attention in the community. The main idea is to bridge the gap between NLP downstream tasks and language modeling (LM), by mapping these tasks into natural language prompts, which are then filled by pre-trained language models (PLMs). However, for prompt learning, there are still two salient gaps between NLP tasks and pretraining. First, prompt information is not necessarily sufficiently present during LM pretraining. Second, task-specific data are not necessarily well represented during pretraining. We address these two issues by proposing AdaPrompt, adaptively retrieving external data for continual pretraining of PLMs by making use of both task and prompt characteristics. In addition, we make use of knowledge in Natural Language Inference models for deriving adaptive verbalizers. Experimental results on five NLP benchmarks show that AdaPrompt can improve over standard PLMs in few-shot settings. In addition, in zero-shot settings, our method outperforms standard prompt-based methods by up to 26.35% relative error reduction.",d235a9085e0543fcbe502fbc269f9a8ee01dcbab,Semantic Scholar,,, convfinqa exploring the chain of numerical reasoning in conversational finance question answering,"['Zhiyu Chen', 'SHIYANG LI', 'Charese Smiley', 'Zhiqiang Ma', 'Sameena Shah', 'William Yang Wang']",http://arxiv.org/pdf/2210.03849,2022-10-07,,"With the recent advance in large pre-trained language models, researchers have achieved record performances in NLP tasks that mostly focus on language pattern matching. The community is experiencing the shift of the challenge from how to model language to the imitation of complex reasoning abilities like human beings. In this work, we investigate the application domain of finance that involves real-world, complex numerical reasoning. We propose a new large-scale dataset, ConvFinQA, aiming to study the chain of numerical reasoning in conversational question answering. Our dataset poses a great challenge in modeling long-range, complex numerical reasoning paths in real-world conversations. We conduct comprehensive experiments and analyses with both the neural symbolic methods and the prompting-based methods, to provide insights into the reasoning mechanisms of these two divisions. We believe our new dataset should serve as a valuable resource to push forward the exploration of real-world, complex reasoning tasks as the next research focus. Our dataset and code are publicly available at https://github.com/czyssrs/ConvFinQA.",d96997265f8146e93b4c9350f19d55e46d1317f0,Semantic Scholar,,, exploring effectiveness of gpt3 in grammatical error correction a study on performance and controllability in promptbased methods,"['Mengsay Loem', 'Masahiro Kaneko', 'Sho Takase', 'Naoaki Okazaki']",http://arxiv.org/pdf/2305.18156,2023-05-29,,"Large-scale pre-trained language models such as GPT-3 have shown remarkable performance across various natural language processing tasks. 
However, applying prompt-based methods with GPT-3 for Grammatical Error Correction (GEC) tasks and their controllability remains underexplored. Controllability in GEC is crucial for real-world applications, particularly in educational settings, where the ability to tailor feedback according to learner levels and specific error types can significantly enhance the learning process. This paper investigates the performance and controllability of prompt-based methods with GPT-3 for GEC tasks using zero-shot and few-shot settings. We explore the impact of task instructions and examples on GPT-3’s output, focusing on controlling aspects such as minimal edits, fluency edits, and learner levels. Our findings demonstrate that GPT-3 could effectively perform GEC tasks, outperforming existing supervised and unsupervised approaches. We also showed that GPT-3 could achieve controllability when appropriate task instructions and examples are given.",db0d67057b41927b5b51d3a393c250be64a405ae,Semantic Scholar,,, selfevolve a code evolution framework via large language models,"['Shuyang Jiang', 'Yuhao Wang', 'Yu Wang']",http://arxiv.org/pdf/2306.02907,2023-06-05,,"Large language models (LLMs) have already revolutionized code generation, after being pretrained on publicly available code data. However, while various methods have been proposed to augment LLMs with retrieved knowledge and enhance the quality of code generation, the performance of these retrieval-based methods is limited by the strength of the retrievers used. In addition, while LLMs show great emergent ability, they still struggle to produce the correct code in one turn. To address these challenges, we propose a novel two-step pipeline, called SelfEvolve, that leverages LLMs as both knowledge providers and self-reflective programmers. Unlike retrieval-based methods, SelfEvolve obtains the knowledge from input prompts and generates intermediate code based on the generated knowledge. After that, SelfEvolve asks the LLM to act as an expert programmer to perform debugging for the generated code. This is achieved by receiving the error message from the interpreter, without requiring special test cases for correctness verification. We evaluate SelfEvolve on three code generation datasets, including DS-1000 for data science code, HumanEval for software engineering code, and TransCoder for C++-to-Python translation. Our empirical experiments show that SelfEvolve outperforms strong baselines by a significant margin on all datasets. We also conduct exhaustive analytical experiments to validate the effectiveness of the two stages of SelfEvolve, and find that both are superior to other prompting-based methods. Further scalability analysis demonstrates that SelfEvolve can be adapted to other more advanced models, such as GPT-4, and bring consistent efficacy improvement.",eb36681fc4c5dfce4f3e05540fc92b007de278ca,Semantic Scholar,,, zeroshot information extraction via chatting with chatgpt,"['Xiang Wei', 'Xingyu Cui', 'Ning Cheng', 'Xiaobin Wang', 'Xin Zhang', 'Shen Huang', 'Pengjun Xie', 'Jinan Xu', 'Yufeng Chen', 'Meishan Zhang', 'Yong Jiang', 'Wenjuan Han']",http://arxiv.org/pdf/2302.10205,2023-02-20,,"Zero-shot information extraction (IE) aims to build IE systems from unannotated text. It is challenging due to involving little human intervention. Challenging but worthwhile, zero-shot IE reduces the time and effort that data labeling takes. 
Recent efforts on large language models (LLMs, e.g., GPT-3, ChatGPT) show promising performance on zero-shot settings, thus inspiring us to explore prompt-based methods. In this work, we ask whether strong IE models can be constructed by directly prompting LLMs. Specifically, we transform the zero-shot IE task into a multi-turn question-answering problem with a two-stage framework (ChatIE). With the power of ChatGPT, we extensively evaluate our framework on three IE tasks: entity-relation triple extract, named entity recognition, and event extraction. Empirical results on six datasets across two languages show that ChatIE achieves impressive performance and even surpasses some full-shot models on several datasets (e.g., NYT11-HRL). We believe that our work could shed light on building IE models with limited resources.",f4cba0db34aa0c389cec267ca1f3ba5255ea2645,Semantic Scholar,,, scaling sentence embeddings with large language models,"['Ting Jiang', 'Shaohan Huang', 'Zhongzhi Luan', 'Deqing Wang', 'Fuzhen Zhuang']",https://arxiv.org/pdf/2307.16645,2023-07-31,,"Large language models (LLMs) have recently garnered significant interest. With in-context learning, LLMs achieve impressive results in various natural language tasks. However, the application of LLMs to sentence embeddings remains an area of ongoing research. In this work, we propose an in-context learning-based method aimed at improving sentence embeddings performance. Our approach involves adapting the previous prompt-based representation method for autoregressive models, constructing a demonstration set that enables LLMs to perform in-context learning, and scaling up the LLMs to different model sizes. Through extensive experiments, in-context learning enables LLMs to generate high-quality sentence embeddings without any fine-tuning. It helps LLMs achieve performance comparable to current contrastive learning methods. By scaling model size, we find scaling to more than tens of billion parameters harms the performance on semantic textual similarity (STS) tasks. However, the largest model outperforms other counterparts and achieves the new state-of-the-art result on transfer tasks. We also fine-tune LLMs with current contrastive learning approach, and the 2.7B OPT model, incorporating our prompt-based method, surpasses the performance of 4.8B ST5, achieving the new state-of-the-art results on STS tasks. Our code is available at https://github.com/kongds/scaling_sentemb.",f7ccf8ecd508e0b2d423169588dd1c1a82dd3b4d,Semantic Scholar,,, prompting to distill boosting datafree knowledge distillation via reinforced prompt,"['Xinyin Ma', 'Xinchao Wang', 'Gongfan Fang', 'Yongliang Shen', 'Weiming Lu']",https://arxiv.org/pdf/2205.07523,2022-05-16,,"Data-free knowledge distillation (DFKD) conducts knowledge distillation via eliminating the dependence of original training data, and has recently achieved impressive results in accelerating pre-trained language models. At the heart of DFKD is to reconstruct a synthetic dataset by inverting the parameters of the uncompressed model. Prior DFKD approaches, however, have largely relied on hand-crafted priors of the target data distribution for the reconstruction, which can be inevitably biased and often incompetent to capture the intrinsic distributions. To address this problem, we propose a prompt-based method, termed as PromptDFD, that allows us to take advantage of learned language priors, which effectively harmonizes the synthetic sentences to be semantically and grammatically correct. 
Specifically, PromptDFD leverages a pre-trained generative model to provide language priors and introduces a reinforced topic prompter to control data synthesis, making the generated samples thematically relevant and semantically plausible, and thus friendly to downstream tasks. As shown in our experiments, the proposed method substantially improves the synthesis quality and achieves considerable improvements on distillation performance. In some cases, PromptDFD even gives rise to results on par with those from the data-driven knowledge distillation with access to the original training data.",fb1d85fe28b5e92e22d084eca674d4a2b48cdc5a,Semantic Scholar,,, are hard examples also harder to explain a study with human and modelgenerated explanations,"['Swarnadeep Saha', 'Peter Hase', 'Nazneen Rajani', 'Mohit Bansal']",https://arxiv.org/pdf/2211.07517,2022-11-14,,"Recent work on explainable NLP has shown that few-shot prompting can enable large pre-trained language models (LLMs) to generate grammatical and factual natural language explanations for data labels. In this work, we study the connection between explainability and sample hardness by investigating the following research question – “Are LLMs and humans equally good at explaining data labels for both easy and hard samples?” We answer this question by first collecting human-written explanations in the form of generalizable commonsense rules on the task of Winograd Schema Challenge (Winogrande dataset). We compare these explanations with those generated by GPT-3 while varying the hardness of the test samples as well as the in-context samples. We observe that (1) GPT-3 explanations are as grammatical as human explanations regardless of the hardness of the test samples, (2) for easy examples, GPT-3 generates highly supportive explanations but human explanations are more generalizable, and (3) for hard examples, human explanations are significantly better than GPT-3 explanations both in terms of label-supportiveness and generalizability judgements. We also find that hardness of the in-context examples impacts the quality of GPT-3 explanations. Finally, we show that the supportiveness and generalizability aspects of human explanations are also impacted by sample hardness, although by a much smaller margin than models.",0040dac7a1bf7a1eeb01c86ddb993f331f35b158,Semantic Scholar,,, controllable generation of dialogue acts for dialogue systems via fewshot response generation and ranking,"['Angela Ramirez', 'Karik Agarwal', 'Juraj Juraska', 'Utkarsh Garg', 'M. Walker']",https://arxiv.org/pdf/2307.14440,2023-07-26,,"Dialogue systems need to produce responses that realize multiple types of dialogue acts (DAs) with high semantic fidelity. In the past, natural language generators (NLGs) for dialogue were trained on large parallel corpora that map from a domain-specific DA and its semantic attributes to an output utterance. Recent work shows that pretrained language models (LLMs) offer new possibilities for controllable NLG using prompt-based learning. Here we develop a novel few-shot overgenerate-and-rank approach that achieves the controlled generation of DAs. We compare eight few-shot prompt styles that include a novel method of generating from textual pseudo-references using a textual style transfer approach. We develop six automatic ranking functions that identify outputs with both the correct DA and high semantic accuracy at generation time. We test our approach on three domains and four LLMs. 
To our knowledge, this is the first work on NLG for dialogue that automatically ranks outputs using both DA and attribute accuracy. For completeness, we compare our results to fine-tuned few-shot models trained with 5 to 100 instances per DA. Our results show that several prompt settings achieve perfect DA accuracy, and near perfect semantic accuracy (99.81%) and perform better than few-shot fine-tuning.",03d8b1e78d124a561f3c2a67d3199472ee73228d,Semantic Scholar,,, lambada backward chaining for automated reasoning in natural language,"['Seyed Mehran Kazemi', 'Najoung Kim', 'Deepti Bhatia', 'Xinyuan Xu', 'Deepak Ramachandran']",http://arxiv.org/pdf/2212.13894,2022-12-20,,"Remarkable progress has been made on automated reasoning with natural text, by using Large Language Models (LLMs) and methods such as Chain-of-Thought prompting and Selection-Inference. These techniques search for proofs in the forward direction from axioms to the conclusion, which suffers from a combinatorial explosion of the search space, and thus high failure rates for problems requiring longer chains of reasoning. The classical automated reasoning literature has shown that reasoning in the backward direction (i.e. from intended conclusion to supporting axioms) is significantly more efficient at proof-finding. Importing this intuition into the LM setting, we develop a Backward Chaining algorithm, called LAMBADA, that decomposes reasoning into four sub-modules, that are simply implemented by few-shot prompted LLM inference. We show that LAMBADA achieves sizable accuracy boosts over state-of-the-art forward reasoning methods on two challenging logical reasoning datasets, particularly when deep and accurate proof chains are required.",03fb95e6be583ca954c3d00812a9e9a40f118e51,Semantic Scholar,,, skillbased fewshot selection for incontext learning,"['Shengnan An', 'Bo Zhou', 'Zeqi Lin', 'Qiang Fu', 'B. Chen', 'Nanning Zheng', 'Weizhu Chen', 'Jian-Guang Lou']",https://arxiv.org/pdf/2305.14210,2023-05-23,,"In-context learning is the paradigm that adapts large language models to downstream tasks by providing a few examples. Few-shot selection -- selecting appropriate examples for each test instance separately -- is important for in-context learning. In this paper, we propose Skill-KNN, a skill-based few-shot selection method for in-context learning. The key advantages of Skill-KNN include: (1) it addresses the problem that existing methods based on pre-trained embeddings can be easily biased by surface natural language features that are not important for the target task; (2) it does not require training or fine-tuning of any models, making it suitable for frequently expanding or changing example banks. The key insight is to optimize the inputs fed into the embedding model, rather than tuning the model itself. Technically, Skill-KNN generates the skill-based descriptions for each test case and candidate example by utilizing a pre-processing few-shot prompting, thus eliminating unimportant surface features. 
Experimental results across five cross-domain semantic parsing datasets and six backbone models show that Skill-KNN significantly outperforms existing methods.",04526876688e5a56106629229309fae272da1c79,Semantic Scholar,,, echoprompt instructing the model to rephrase queries for improved incontext learning,"['Rajasekhar Reddy Mekala', 'Yasaman Razeghi', 'Sameer Singh']",https://arxiv.org/pdf/2309.10687,2023-09-16,,"Language models are achieving impressive performance on various tasks by aggressively adopting inference-time prompting techniques, such as zero-shot and few-shot prompting. In this work, we introduce EchoPrompt, a simple yet effective approach that prompts the model to rephrase its queries before answering them. EchoPrompt is adapted for both zero-shot and few-shot in-context learning with standard and chain-of-thought prompting. Experimental results show that EchoPrompt yields substantial improvements across all these settings for four families of causal language models. These improvements are observed across various numerical reasoning (e.g. GSM8K, SVAMP), reading comprehension (e.g. DROP), and logical reasoning (e.g. Coin Flipping) tasks. On average, EchoPrompt improves the Zero-shot-CoT performance of code-davinci-002 by 5% in numerical tasks and 13% in reading comprehension tasks. We investigate the factors contributing to EchoPrompt's effectiveness through ablation studies, which reveal that both the original query and the model-generated rephrased version are instrumental in its performance gains. Our empirical results indicate that EchoPrompt is an effective technique that enhances in-context learning performance. We recommend incorporating EchoPrompt into various baseline prompting strategies to achieve performance boosts.",04e838c16f3d1fb8d69d34fe0a0a92c59717875b,Semantic Scholar,,, improved compositional generalization by generating demonstrations for metalearning,"['Sam Spilsbury', 'A. Ilin']",http://arxiv.org/pdf/2305.13092,2023-05-22,,"Meta-learning and few-shot prompting are viable methods to induce certain types of compositional behaviour. However, these methods can be very sensitive to the choice of support examples used. Choosing good supports from the training data for a given test query is already a difficult problem, but in some cases solving this may not even be enough. We consider a grounded language learning problem (gSCAN) where good support examples for certain test splits might not even exist in the training data, or would be infeasible to search for. We design an agent which instead generates possible supports which are relevant to the test query and current state of the world, then uses these supports via meta-learning to solve the test query. We show substantially improved performance on a previously unsolved compositional behaviour split without a loss of performance on other splits. Further experiments show that in this case, searching for relevant demonstrations even with an oracle function is not sufficient to attain good performance when using meta-learning.",088ba3cfb904ccd0aa1993a1e30c725b061aad7e,Semantic Scholar,,, fantastically ordered prompts and where to find them overcoming fewshot prompt order sensitivity,"['Yao Lu', 'Max Bartolo', 'Alastair Moore', 'S. 
Riedel', 'Pontus Stenetorp']",https://aclanthology.org/2022.acl-long.556.pdf,2021-04-18,,"When primed with only a handful of training samples, very large, pretrained language models such as GPT-3 have shown competitive results when compared to fully-supervised, fine-tuned, large, pretrained language models. We demonstrate that the order in which the samples are provided can make the difference between near state-of-the-art and random guess performance: essentially some permutations are “fantastic” and some not. We analyse this phenomenon in detail, establishing that: it is present across model sizes (even for the largest current models), it is not related to a specific subset of samples, and that a given good permutation for one model is not transferable to another. While one could use a development set to determine which permutations are performant, this would deviate from the true few-shot setting as it requires additional annotated data. Instead, we use the generative nature of language models to construct an artificial development set and based on entropy statistics of the candidate permutations on this set, we identify performant prompts. Our method yields a 13% relative improvement for GPT-family models across eleven different established text classification tasks.",0adec918885dff698acf359988ed79a543157f80,Semantic Scholar,,, crowd score a method for the evaluation of jokes using large language model ai voters as judges,"['Fabrício Góes', 'Zisen Zhou', 'Piotr Sawicki', 'M. Grzes', 'Daniel Brown']",http://arxiv.org/pdf/2212.11214,2022-12-21,,"This paper presents the Crowd Score, a novel method to assess the funniness of jokes using large language models (LLMs) as AI judges. Our method relies on inducing different personalities into the LLM and aggregating the votes of the AI judges into a single score to rate jokes. We validate the votes using an auditing technique that checks if the explanation for a particular vote is reasonable using the LLM. We tested our methodology on 52 jokes in a crowd of four AI voters with different humour types: affiliative, self-enhancing, aggressive and self-defeating. Our results show that few-shot prompting leads to better results than zero-shot for the voting question. Personality induction showed that aggressive and self-defeating voters are significantly more inclined to find more jokes funny of a set of aggressive/self-defeating jokes than the affiliative and self-enhancing voters. The Crowd Score follows the same trend as human judges by assigning higher scores to jokes that are also considered funnier by human judges. We believe that our methodology could be applied to other creative domains such as story, poetry, slogans, etc. It could both help the adoption of a flexible and accurate standard approach to compare different work in the CC community under a common metric and by minimizing human participation in assessing creative artefacts, it could accelerate the prototyping of creative artefacts and reduce the cost of hiring human participants to rate creative artefacts. 1",0ba5fb80d2c3ea3a8505415e32d954b4e4eea170,Semantic Scholar,,, art automatic multistep reasoning and tooluse for large language models,"['Bhargavi Paranjape', 'Scott M. Lundberg', 'Sameer Singh', 'Hannaneh Hajishirzi', 'Luke Zettlemoyer', 'Marco Tulio Ribeiro']",http://arxiv.org/pdf/2303.09014,2023-03-16,,"Large language models (LLMs) can perform complex reasoning in few- and zero-shot settings by generating intermediate chain of thought (CoT) reasoning steps. 
Further, each reasoning step can rely on external tools to support computation beyond the core LLM capabilities (e.g. search/running code). Prior work on CoT prompting and tool use typically requires hand-crafting task-specific demonstrations and carefully scripted interleaving of model generations with tool use. We introduce Automatic Reasoning and Tool-use (ART), a framework that uses frozen LLMs to automatically generate intermediate reasoning steps as a program. Given a new task to solve, ART selects demonstrations of multi-step reasoning and tool use from a task library. At test time, ART seamlessly pauses generation whenever external tools are called, and integrates their output before resuming generation. ART achieves a substantial improvement over few-shot prompting and automatic CoT on unseen tasks in the BigBench and MMLU benchmarks, and matches performance of hand-crafted CoT prompts on a majority of these tasks. ART is also extensible, and makes it easy for humans to improve performance by correcting errors in task-specific programs or incorporating new tools, which we demonstrate by drastically improving performance on select tasks with minimal human intervention.",0d42221038c05cee8443c5b5af838505ee137dc3,Semantic Scholar,,, promptandrerank a method for zeroshot and fewshot arbitrary textual style transfer with small language models,"['Mirac Suzgun', 'Luke Melas-Kyriazi', 'Dan Jurafsky']",https://arxiv.org/pdf/2205.11503,2022-05-23,,"We propose a method for arbitrary textual style transfer (TST)—the task of transforming a text into any given style—utilizing general-purpose pre-trained language models. Our method, Prompt-and-Rerank, is based on a mathematical formulation of the TST task, decomposing it into three constituent components: textual similarity, target style strength, and fluency. Our method uses zero-shot or few-shot prompting to obtain a set of candidate generations in the target style, and then re-ranks them according to the three components. Our method enables small pre-trained language models to perform on par with state-of-the-art large-scale models while using two orders of magnitude less compute and memory. We also investigate the effect of model size and prompt design (e.g., prompt paraphrasing and delimiter-pair choice) on style transfer quality across seven diverse textual style transfer datasets, finding, among other things, that delimiter-pair choice has a large impact on performance, and that models have biases on the direction of style transfer.",0d6bb585493e34975f0437faa3179db3a02f6ae8,Semantic Scholar,,, teaching arithmetic to small transformers,"['Nayoung Lee', 'Kartik K. Sreenivasan', 'Jason D. Lee', 'Kangwook Lee', 'Dimitris Papailiopoulos']",https://arxiv.org/pdf/2307.03381,2023-07-07,,"Large language models like GPT-4 exhibit emergent capabilities across general-purpose tasks, such as basic arithmetic, when trained on extensive text data, even though these tasks are not explicitly encoded by the unsupervised, next-token prediction objective. This study investigates how small transformers, trained from random initialization, can efficiently learn arithmetic operations such as addition, multiplication, and elementary functions like square root, using the next-token prediction objective. We first demonstrate that conventional training data is not the most effective for arithmetic learning, and simple formatting changes can significantly improve accuracy. 
This leads to sharp phase transitions as a function of training data scale, which, in some cases, can be explained through connections to low-rank matrix completion. Building on prior work, we then train on chain-of-thought style data that includes intermediate step results. Even in the complete absence of pretraining, this approach significantly and simultaneously improves accuracy, sample complexity, and convergence speed. We also study the interplay between arithmetic and text data during training and examine the effects of few-shot prompting, pretraining, and model scale. Additionally, we discuss length generalization challenges. Our work highlights the importance of high-quality, instructive data that considers the particular characteristics of the next-word prediction objective for rapidly eliciting arithmetic capabilities.",0db0af0cd3ceb0531a050a03e6ceb849580ff53b,Semantic Scholar,,, generating medicallyaccurate summaries of patientprovider dialogue a multistage approach using large language models,"['Varun Nair', 'Elliot Schumacher', 'Anitha Kannan']",http://arxiv.org/pdf/2305.05982,2023-05-10,,"A medical provider’s summary of a patient visit serves several critical purposes, including clinical decision-making, facilitating hand-offs between providers, and as a reference for the patient. An effective summary is required to be coherent and accurately capture all the medically relevant information in the dialogue, despite the complexity of patient-generated language. Even minor inaccuracies in visit summaries (for example, summarizing “patient does not have a fever” when a fever is present) can be detrimental to the outcome of care for the patient.This paper tackles the problem of medical conversation summarization by discretizing the task into several smaller dialogue-understanding tasks that are sequentially built upon. First, we identify medical entities and their affirmations within the conversation to serve as building blocks. We study dynamically constructing few-shot prompts for tasks by conditioning on relevant patient information and use GPT-3 as the backbone for our experiments. We also develop GPT-derived summarization metrics to measure performance against reference summaries quantitatively. Both our human evaluation study and metrics for medical correctness show that summaries generated using this approach are clinically accurate and outperform the baseline approach of summarizing the dialog in a zero-shot, single-prompt setting.",0f0a973c6457bcaf7255f891f9b34d658a0a84ae,Semantic Scholar,,, learning performanceimproving code edits,"['Aman Madaan', 'Alex Shypula', 'Uri Alon', 'Milad Hashemi', 'Parthasarathy Ranganathan', 'Yiming Yang', 'Graham Neubig', 'A. Yazdanbakhsh']",http://arxiv.org/pdf/2302.07867,2023-02-15,,"The waning of Moore's Law has shifted the focus of the tech industry towards alternative methods for continued performance gains. While optimizing compilers are a standard tool to help increase program efficiency, programmers continue to shoulder much responsibility in crafting and refactoring code with better performance characteristics. In this paper, we investigate the ability of large language models (LLMs) to suggest functionally correct, performance improving code edits. We hypothesize that language models can suggest such edits in ways that would be impractical for static analysis alone. We investigate these questions by curating a large-scale dataset of Performance-Improving Edits, PIE. 
PIE contains trajectories of programs, where a programmer begins with an initial, slower version and iteratively makes changes to improve the program's performance. We use PIE to evaluate and improve the capacity of large language models. Specifically, we use examples from PIE to fine-tune multiple variants of CODEGEN, a billion-scale Transformer-decoder model. Additionally, we use examples from PIE to prompt OpenAI's CODEX using few-shot prompting. By leveraging PIE, we find that both CODEX and CODEGEN can generate performance-improving edits, with speedups of more than 2.5x for over 25% of the programs, for C++ and Python, even after the C++ programs were compiled using the O3 optimization level. Crucially, we show that PIE allows CODEGEN, an open-sourced and 10x smaller model than CODEX, to match the performance of CODEX on this challenging task. Overall, this work opens new doors for creating systems and methods that can help programmers write efficient code.",1786a2f9140ed7211b21302977de64e948b92308,Semantic Scholar,,, prompting palm for translation assessing strategies and performance,"['David Vilar', 'Markus Freitag', 'Colin Cherry', 'Jiaming Luo', 'Viresh Ratnakar', 'George F. Foster']",http://arxiv.org/pdf/2211.09102,2022-11-16,,"Large language models (LLMs) that have been trained on multilingual but not parallel text exhibit a remarkable ability to translate between languages. We probe this ability in an in-depth study of the pathways language model (PaLM), which has demonstrated the strongest machine translation (MT) performance among similarly-trained LLMs to date. We investigate various strategies for choosing translation examples for few-shot prompting, concluding that example quality is the most important factor. Using optimized prompts, we revisit previous assessments of PaLM’s MT capabilities with more recent test sets, modern MT metrics, and human evaluation, and find that its performance, while impressive, still lags that of state-of-the-art supervised systems. We conclude by providing an analysis of PaLM’s MT output which reveals some interesting properties and prospects for future work.",197ba7bbfdbb052b0770088815c110774220f397,Semantic Scholar,,, contextual biasing of namedentities with large language models,"['Chuanneng Sun', 'Zeeshan Ahmed', 'Yingyi Ma', 'Zhe Liu', 'Yutong Pang', 'Ozlem Kalinli']",https://arxiv.org/pdf/2309.00723,2023-09-01,,"This paper studies contextual biasing with Large Language Models (LLMs), where during second-pass rescoring additional contextual information is provided to an LLM to boost Automatic Speech Recognition (ASR) performance. We propose to leverage prompts for an LLM without fine-tuning during rescoring which incorporate a biasing list and few-shot examples to serve as additional information when calculating the score for the hypothesis. In addition to few-shot prompt learning, we propose multi-task training of the LLM to predict both the entity class and the next token. To improve the efficiency for contextual biasing and to avoid exceeding LLMs' maximum sequence lengths, we propose dynamic prompting, where we select the most likely class using the class tag prediction, and only use entities in this class as contexts for next token prediction. Word Error Rate (WER) evaluation is performed on i) an internal calling, messaging, and dictation dataset, and ii) the SLUE-Voxpopuli dataset. 
Results indicate that biasing lists and few-shot examples can achieve 17.8% and 9.6% relative improvement compared to first pass ASR, and that multi-task training and dynamic prompting can achieve 20.0% and 11.3% relative WER improvement, respectively.",1ed5d06c4dc46e6a983597b740ab0a31d0ce22ad,Semantic Scholar,,, mixpro simple yet effective data augmentation for promptbased learning,"['Bohan Li', 'Longxu Dou', 'Yutai Hou', 'Yunlong Feng', 'Honglin Mu', 'Wanxiang Che']",http://arxiv.org/pdf/2304.09402,2023-04-19,,"Prompt-based learning has shown considerable promise in reformulating various downstream tasks as cloze problems by combining original input with a predetermined template. This approach demonstrates its effectiveness, especially in few-shot learning scenarios, where the model is trained on a scarce amount of data. Despite its successes, the limited templates and text in few-shot prompt-based learning scenarios leave significant room for performance improvement. Moreover, existing methods sometimes resort to model ensembles, which, while effective, could potentially hamper model efficiency due to increased computational demands. To address these issues, we introduce MixPro, an augmentation method designed to augment both the vanilla input text and the templates. We implement this through the token-level, the sentence-level, and the template-level Mixup strategies. The experimental results on five few-shot datasets show that MixPro outperforms other augmentation baselines, improving model performance by an average of 5.08% compared to before augmentation.",1f0dfbbc13ac31de8709bbb4d0f6478aa1222cef,Semantic Scholar,,, mapl parameterefficient adaptation of unimodal pretrained models for visionlanguage fewshot prompting,"['Oscar Mañas', 'Pau Rodríguez López', 'Saba Ahmadi', 'Aida Nematzadeh', 'Yash Goyal', 'Aishwarya Agrawal']",http://arxiv.org/pdf/2210.07179,2022-10-13,,"Large pre-trained models have proved to be remarkable zero- and (prompt-based) few-shot learners in unimodal vision and language tasks. We propose MAPL, a simple and parameter-efficient method that reuses frozen pre-trained unimodal models and leverages their strong generalization capabilities in multimodal vision-language (VL) settings. MAPL learns a lightweight mapping between the representation spaces of unimodal models using aligned image-text data, and can generalize to unseen VL tasks from just a few in-context examples. The small number of trainable parameters makes MAPL effective at low-data and in-domain learning. Moreover, MAPL’s modularity enables easy extension to other pre-trained models. Extensive experiments on several visual question answering and image captioning benchmarks show that MAPL achieves superior or competitive performance compared to similar methods while training orders of magnitude fewer parameters. MAPL can be trained in just a few hours using modest computational resources and public datasets. We release our code and pre-trained model weights at https://github.com/oscmansan/mapl.",1f86bf1e334200ec0481349255559fbfe7a33caa,Semantic Scholar,,, dspy compiling declarative language model calls into selfimproving pipelines,"['O. Khattab', 'Arnav Singhvi', 'Paridhi Maheshwari', 'Zhiyuan Zhang', 'Keshav Santhanam', 'Sri Vardhamanan', 'Saiful Haq', 'Ashutosh Sharma', 'Thomas T. 
Joshi', 'Hanna Moazam', 'Heather Miller', 'Matei Zaharia', 'Christopher Potts']",https://arxiv.org/pdf/2310.03714,2023-10-05,,"The ML community is rapidly exploring techniques for prompting language models (LMs) and for stacking them into pipelines that solve complex tasks. Unfortunately, existing LM pipelines are typically implemented using hard-coded""prompt templates"", i.e. lengthy strings discovered via trial and error. Toward a more systematic approach for developing and optimizing LM pipelines, we introduce DSPy, a programming model that abstracts LM pipelines as text transformation graphs, i.e. imperative computational graphs where LMs are invoked through declarative modules. DSPy modules are parameterized, meaning they can learn (by creating and collecting demonstrations) how to apply compositions of prompting, finetuning, augmentation, and reasoning techniques. We design a compiler that will optimize any DSPy pipeline to maximize a given metric. We conduct two case studies, showing that succinct DSPy programs can express and optimize sophisticated LM pipelines that reason about math word problems, tackle multi-hop retrieval, answer complex questions, and control agent loops. Within minutes of compiling, a few lines of DSPy allow GPT-3.5 and llama2-13b-chat to self-bootstrap pipelines that outperform standard few-shot prompting (generally by over 25% and 65%, respectively) and pipelines with expert-created demonstrations (by up to 5-46% and 16-40%, respectively). On top of that, DSPy programs compiled to open and relatively small LMs like 770M-parameter T5 and llama2-13b-chat are competitive with approaches that rely on expert-written prompt chains for proprietary GPT-3.5. DSPy is available at https://github.com/stanfordnlp/dspy",2069aaaa281eb13bcd9330fc4d43f24f6b436a53,Semantic Scholar,,, interrolang exploring nlp models and datasets through dialoguebased explanations,"['Nils Feldhus', 'Qianli Wang', 'Tatiana Anikina', 'Sahil Chopra', 'Cennet Oguz', 'Sebastian Möller']",https://arxiv.org/pdf/2310.05592,2023-10-09,,"While recently developed NLP explainability methods let us open the black box in various ways (Madsen et al., 2022), a missing ingredient in this endeavor is an interactive tool offering a conversational interface. Such a dialogue system can help users explore datasets and models with explanations in a contextualized manner, e.g. via clarification or follow-up questions, and through a natural language interface. We adapt the conversational explanation framework TalkToModel (Slack et al., 2022) to the NLP domain, add new NLP-specific operations such as free-text rationalization, and illustrate its generalizability on three NLP tasks (dialogue act classification, question answering, hate speech detection). To recognize user queries for explanations, we evaluate fine-tuned and few-shot prompting models and implement a novel Adapter-based approach. We then conduct two user studies on (1) the perceived correctness and helpfulness of the dialogues, and (2) the simulatability, i.e. how objectively helpful dialogical explanations are for humans in figuring out the model's predicted label when it's not shown. We found rationalization and feature attribution were helpful in explaining the model behavior. 
Moreover, users could more reliably predict the model outcome based on an explanation dialogue rather than one-off explanations.",2522410b1cac0c14fa656a0aaeaff08bacb358a9,Semantic Scholar,,, multilingual evaluation of code generation models,"['Ben Athiwaratkun', 'Sanjay Krishna Gouda', 'Zijian Wang', 'Xiaopeng Li', 'Yuchen Tian', 'Ming Tan', 'Wasi Uddin Ahmad', 'Shiqi Wang', 'Qing Sun', 'Mingyue Shang', 'Sujan Kumar Gonugondla', 'Hantian Ding', 'Varun Kumar', 'Nathan Fulton', 'A. Farahani', 'Siddharth Jain', 'Robert Giaquinto', 'Haifeng Qian', 'M. Ramanathan', 'Ramesh Nallapati', 'Baishakhi Ray', 'Parminder Bhatia', 'Sudipta Sengupta', 'D. Roth', 'Bing Xiang']",http://arxiv.org/pdf/2210.14868,2022-10-27,,"We present new benchmarks on evaluation code generation models: MBXP and Multilingual HumanEval, and MathQA-X. These datasets cover over 10 programming languages and are generated using a scalable conversion framework that transpiles prompts and test cases from the original Python datasets into the corresponding data in the target language. Using these benchmarks, we are able to assess the performance of code generation models in a multi-lingual fashion, and discovered generalization ability of language models on out-of-domain languages, advantages of multi-lingual models over mono-lingual, the ability of few-shot prompting to teach the model new languages, and zero-shot translation abilities even on mono-lingual settings. Furthermore, we use our code generation model to perform large-scale bootstrapping to obtain synthetic canonical solutions in several languages, which can be used for other code-related evaluations such as code insertion, robustness, or summarization tasks. Overall, our benchmarks represents a significant step towards a deeper understanding of language models' code generation abilities. We publicly release our code and datasets at https://github.com/amazon-research/mxeval.",2577d053f8aab912d29b424e1f09133d83740fd2,Semantic Scholar,,, towards using fewshot prompt learning for automating model completion,"['Meriem Ben Chaaben', 'Lola Burgueño', 'H. Sahraoui']",https://arxiv.org/pdf/2212.03404,2022-12-07,,We propose a simple yet a novel approach to improve completion in domain modeling activities. Our approach exploits the power of large language models by using few-shot prompt learning without the need to train or fine-tune those models with large datasets that are scarce in this field. We implemented our approach and tested it on the completion of static and dynamic domain diagrams. Our initial evaluation shows that such an approach is effective and can be integrated in different ways during the modeling activities.,2a99239f09e95f4dbccec572d66f4519206762f9,Semantic Scholar,,, "better patching using llm prompting, via selfconsistency","['Toufique Ahmed', 'Prem Devanbu']",https://arxiv.org/pdf/2306.00108,2023-05-31,,"Large Language models (LLMs) can be induced to solve non-trivial problems with “few-shot” prompts including illustrative problem-solution examples. Now if the few-shots also include “chain of thought” ($\mathcal{C}oT$) explanations, which are of the form problem-explanation-solution, LLMs will generate a “explained” solution, and perform even better. 
Recently an exciting, substantially better technique, self-consistency [1] ($\mathcal{S}-C$) has emerged, based on the intuition that there are many plausible explanations for the right solution; when the LLM is sampled repeatedly to generate a pool of explanation-solution pairs, for a given problem, the most frequently occurring solutions in the pool (ignoring the explanations) tend to be even more likely to be correct! Unfortunately, the use of this highly-performant $\mathcal{S}-C$ (or even $\mathcal{C}oT$) approach in software engineering settings is hampered by the lack of explanations; most software datasets lack explanations. In this paper, we describe an application of the $\mathcal{S}-C$ approach to program repair, using the commit log on the fix as the explanation, only in the illustrative few-shots. We achieve state-of-the art results, beating previous approaches to prompting-based program repair, on the MODIT dataset; we also find evidence suggesting that the correct commit messages are helping the LLM learn to produce better patches.",32426b96ff3c680125bde3b835bfa931288b8ade,Semantic Scholar,,, large language model augmented narrative driven recommendations,"['Sheshera Mysore', 'A. McCallum', 'Hamed Zamani']",https://arxiv.org/pdf/2306.02250,2023-06-04,,"Narrative-driven recommendation (NDR) presents an information access problem where users solicit recommendations with verbose descriptions of their preferences and context, for example, travelers soliciting recommendations for points of interest while describing their likes/dislikes and travel circumstances. These requests are increasingly important with the rise of natural language-based conversational interfaces for search and recommendation systems. However, NDR lacks abundant training data for models, and current platforms commonly do not support these requests. Fortunately, classical user-item interaction datasets contain rich textual data, e.g., reviews, which often describe user preferences and context – this may be used to bootstrap training for NDR models. In this work, we explore using large language models (LLMs) for data augmentation to train NDR models. We use LLMs for authoring synthetic narrative queries from user-item interactions with few-shot prompting and train retrieval models for NDR on synthetic queries and user-item interaction data. Our experiments demonstrate that this is an effective strategy for training small-parameter retrieval models that outperform other retrieval and LLM baselines for narrative-driven recommendation.",3566e1245bfc90096fe0cdb8b18674da6519c8d6,Semantic Scholar,,, a comprehensive survey on pretrained foundation models a history from bert to chatgpt,"['Ce Zhou', 'Qian Li', 'Chen Li', 'Jun Yu', 'Yixin Liu', 'Guan Wang', 'Kaichao Zhang', 'Cheng Ji', 'Qi Yan', 'Lifang He', 'Hao Peng', 'Jianxin Li', 'Jia Wu', 'Ziwei Liu', 'P. Xie', 'Caiming Xiong', 'Jian Pei', 'Philip S. Yu', 'Lichao Sun Michigan State University', 'B. University', 'Lehigh University', 'M. University', 'Nanyang Technological University', 'University of California at San Diego', 'D. University', 'U. Chicago', 'S. Research']",http://arxiv.org/pdf/2302.09419,2023-02-18,,"Pretrained Foundation Models (PFMs) are regarded as the foundation for various downstream tasks with different data modalities. A PFM (e.g., BERT, ChatGPT, and GPT-4) is trained on large-scale data which provides a reasonable parameter initialization for a wide range of downstream applications. 
BERT learns bidirectional encoder representations from Transformers, which are trained on large datasets as contextual language models. Similarly, the generative pretrained transformer (GPT) method employs Transformers as the feature extractor and is trained using an autoregressive paradigm on large datasets. Recently, ChatGPT shows promising success on large language models, which applies an autoregressive language model with zero shot or few shot prompting. The remarkable achievements of PFM have brought significant breakthroughs to various fields of AI. Numerous studies have proposed different methods, raising the demand for an updated survey. This study provides a comprehensive review of recent research advancements, challenges, and opportunities for PFMs in text, image, graph, as well as other data modalities. The review covers the basic components and existing pretraining methods used in natural language processing, computer vision, and graph learning. Additionally, it explores advanced PFMs used for different data modalities and unified PFMs that consider data quality and quantity. The review also discusses research related to the fundamentals of PFMs, such as model efficiency and compression, security, and privacy. Finally, the study provides key implications, future research directions, challenges, and open problems in the field of PFMs. Overall, this survey aims to shed light on the research of the PFMs on scalability, security, logical reasoning ability, cross-domain learning ability, and the user-friendly interactive ability for artificial general intelligence.",3599a236f285af48782fc30b1341d13ec7320735,Semantic Scholar,,, language model crossover variation through fewshot prompting,"['Elliot Meyerson', 'M. Nelson', 'Herbie Bradley', 'Arash Moradi', 'Amy K. Hoover', 'J. Lehman']",https://arxiv.org/pdf/2302.12170,2023-02-23,,"This paper pursues the insight that language models naturally enable an intelligent variation operator similar in spirit to evolutionary crossover. In particular, language models of sufficient scale demonstrate in-context learning, i.e. they can learn from associations between a small number of input patterns to generate outputs incorporating such associations (also called few-shot prompting). This ability can be leveraged to form a simple but powerful variation operator, i.e. to prompt a language model with a few text-based genotypes (such as code, plain-text sentences, or equations), and to parse its corresponding output as those genotypes' offspring. The promise of such language model crossover (which is simple to implement and can leverage many different open-source language models) is that it enables a simple mechanism to evolve semantically-rich text representations (with few domain-specific tweaks), and naturally benefits from current progress in language models. Experiments in this paper highlight the versatility of language-model crossover, through evolving binary bit-strings, sentences, equations, text-to-image prompts, and Python code. 
The conclusion is that language model crossover is a promising method for evolving genomes representable as text.",3841234dd49250c4fcbba79eed6593d3b57932c1,Semantic Scholar,,, mathattack attacking large language models towards math solving ability,"['Zihao Zhou', 'Qiufeng Wang', 'Mingyu Jin', 'Jie Yao', 'Jianan Ye', 'Wei Liu', 'Wei Wang', 'Xiaowei Huang', 'Kaizhu Huang']",https://arxiv.org/pdf/2309.01686,2023-09-04,,"With the boom of Large Language Models (LLMs), the research of solving Math Word Problem (MWP) has recently made great progress. However, there are few studies to examine the security of LLMs in math solving ability. Instead of attacking prompts in the use of LLMs, we propose a MathAttack model to attack MWP samples which are closer to the essence of security in solving math problems. Compared to traditional text adversarial attack, it is essential to preserve the mathematical logic of original MWPs during the attacking. To this end, we propose logical entity recognition to identify logical entries which are then frozen. Subsequently, the remaining text are attacked by adopting a word-level attacker. Furthermore, we propose a new dataset RobustMath to evaluate the robustness of LLMs in math solving ability. Extensive experiments on our RobustMath and two another math benchmark datasets GSM8K and MultiAirth show that MathAttack could effectively attack the math solving ability of LLMs. In the experiments, we observe that (1) Our adversarial samples from higher-accuracy LLMs are also effective for attacking LLMs with lower accuracy (e.g., transfer from larger to smaller-size LLMs, or from few-shot to zero-shot prompts); (2) Complex MWPs (such as more solving steps, longer text, more numbers) are more vulnerable to attack; (3) We can improve the robustness of LLMs by using our adversarial samples in few-shot prompts. Finally, we hope our practice and observation can serve as an important attempt towards enhancing the robustness of LLMs in math solving ability. We will release our code and dataset.",3886f3bd2a0af9e75bf9fa5b7db4224969dbf346,Semantic Scholar,,, fineval a chinese financial domain knowledge evaluation benchmark for large language models,"['Liwen Zhang', 'Wei Cai', 'Zhaowei Liu', 'Zhi Yang', 'Wei Dai', 'Yujie Liao', 'Qi Qin', 'Yifei Li', 'Xingxian Liu', 'Zhiqiang Liu', 'Zhoufan Zhu', 'Anbo Wu', 'Xinnan Guo', 'Yun Chen']",https://arxiv.org/pdf/2308.09975,2023-08-19,,"Large language models (LLMs) have demonstrated exceptional performance in various natural language processing tasks, yet their efficacy in more challenging and domain-specific tasks remains largely unexplored. This paper presents FinEval, a benchmark specifically designed for the financial domain knowledge in the LLMs. FinEval is a collection of high-quality multiple-choice questions covering Finance, Economy, Accounting, and Certificate. It includes 4,661 questions spanning 34 different academic subjects. To ensure a comprehensive model performance evaluation, FinEval employs a range of prompt types, including zero-shot and few-shot prompts, as well as answer-only and chain-of-thought prompts. Evaluating state-of-the-art Chinese and English LLMs on FinEval, the results show that only GPT-4 achieved an accuracy close to 70% in different prompt settings, indicating significant growth potential for LLMs in the financial domain knowledge. 
Our work offers a more comprehensive financial knowledge evaluation benchmark, utilizing data of mock exams and covering a wide range of evaluated LLMs.",3b88526a0f0337e3a6b632b4af8fd0882eb4b470,Semantic Scholar,,, model ensemble instead of prompt fusion a samplespecific knowledge transfer method for fewshot prompt tuning,"['Xiangyu Peng', 'Chen Xing', 'Prafulla Kumar Choubey', 'Chien-Sheng Wu', 'Caiming Xiong']",http://arxiv.org/pdf/2210.12587,2022-10-23,,"Prompt tuning approaches, which learn task-specific soft prompts for a downstream task conditioning on frozen pre-trained models, have attracted growing interest due to its parameter efficiency. With large language models and sufficient training data, prompt tuning performs comparably to full-model tuning. However, with limited training samples in few-shot settings, prompt tuning fails to match the performance of full-model fine-tuning. In this work, we focus on improving the few-shot performance of prompt tuning by transferring knowledge from soft prompts of source tasks. Recognizing the good generalization capabilities of ensemble methods in low-data regime, we first experiment and show that a simple ensemble of model predictions based on different source prompts, outperforms existing multi-prompt knowledge transfer approaches such as source prompt fusion in the few-shot setting. Motivated by this observation, we further investigate model ensembles and propose Sample-specific Ensemble of Source Models (SESoM). SESoM learns to adjust the contribution of each source model for each target sample separately when ensembling source model outputs. Through this way, SESoM inherits the superior generalization of model ensemble approaches and simultaneously captures the sample-specific competence of each source prompt. We conduct experiments across a diverse set of eight NLP tasks using models of different scales (T5-{base, large, XL}) and find that SESoM consistently outperforms the existing models of the same as well as larger parametric scale by a large margin.",3d7d385d9ee75a286e8da27f7d3cf9f12651c899,Semantic Scholar,,, code as policies language model programs for embodied control,"['Jacky Liang', 'Wenlong Huang', 'F. Xia', 'Peng Xu', 'Karol Hausman', 'Brian Ichter', 'Peter R. Florence', 'Andy Zeng']",https://arxiv.org/pdf/2209.07753,2022-09-16,,"Large language models (LLMs) trained on code-completion have been shown to be capable of synthesizing simple Python programs from docstrings [1]. We find that these code-writing LLMs can be re-purposed to write robot policy code, given natural language commands. Specifically, policy code can express functions or feedback loops that process perception outputs (e.g., from object detectors [2], [3]) and parameterize control primitive APIs. When provided as input several example language commands (formatted as comments) followed by corresponding policy code (via few-shot prompting), LLMs can take in new commands and autonomously re-compose API calls to generate new policy code respectively. By chaining classic logic structures and referencing third-party libraries (e.g., NumPy, Shapely) to perform arithmetic, LLMs used in this way can write robot policies that (i) exhibit spatial-geometric reasoning, (ii) generalize to new instructions, and (iii) prescribe precise values (e.g., velocities) to ambiguous descriptions (‘faster’) depending on context (i.e., behavioral commonsense). 
This paper presents Code as Policies: a robot-centric formulation of language model generated programs (LMPs) that can represent reactive policies (e.g., impedance controllers), as well as waypoint-based policies (vision-based pick and place, trajectory-based control), demonstrated across multiple real robot platforms. Central to our approach is prompting hierarchical code-gen (recursively defining undefined functions), which can write more complex code and also improves state-of-the-art to solve 39.8% of problems on the HumanEval [1] benchmark. Code and videos are available at https://code-as-policies.github.io",41531594d7e0f3b2e138ae43e0a0f6e24a9b014c,Semantic Scholar,,, tool documentation enables zeroshot toolusage with large language models,"['Cheng-Yu Hsieh', 'Sibei Chen', 'Chun-Liang Li', 'Yasuhisa Fujii', 'Alexander J. Ratner', 'Chen-Yu Lee', 'Ranjay Krishna', 'Tomas Pfister']",https://arxiv.org/pdf/2308.00675,2023-08-01,,"Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.",446fb5dead075a1a08862662738f462e9a0e91c8,Semantic Scholar,,, "text and patterns for effective chain of thought, it takes two to tango","['Aman Madaan', 'A. Yazdanbakhsh']",http://arxiv.org/pdf/2209.07686,2022-09-16,,"The past decade has witnessed dramatic gains in natural language processing and an unprecedented scaling of large language models. These developments have been accelerated by the advent of few-shot techniques such as chain of thought (CoT) prompting. Specifically, CoT pushes the performance of large language models in a few-shot setup by augmenting the prompts with intermediate steps. Despite impressive results across various tasks, the reasons behind their success have not been explored. This work uses counterfactual prompting to develop a deeper understanding of CoT-based few-shot prompting mechanisms in large language models. 
We first systematically identify and define the key components of a prompt: symbols, patterns, and text. Then, we devise and conduct an exhaustive set of experiments across four different tasks, by querying the model with counterfactual prompts where only one of these components is altered. Our experiments across three models (PaLM, GPT-3, and CODEX) reveal several surprising findings and brings into question the conventional wisdom around few-shot prompting. First, the presence of factual patterns in a prompt is practically immaterial to the success of CoT. Second, our results conclude that the primary role of intermediate steps may not be to facilitate learning how to solve a task. The intermediate steps are rather a beacon for the model to realize what symbols to replicate in the output to form a factual answer. Further, text imbues patterns with commonsense knowledge and meaning. Our empirical and qualitative analysis reveals that a symbiotic relationship between text and patterns explains the success of few-shot prompting: text helps extract commonsense from the question to help patterns, and patterns enforce task understanding and direct text generation.",4988b3d378b79eb8669112620baf1ff4e3e536fd,Semantic Scholar,,, revisiting nonenglish text simplification a unified multilingual benchmark,"['Michael Joseph Ryan', 'Tarek Naous', 'Wei Xu']",http://arxiv.org/pdf/2305.15678,2023-05-25,,"Recent advancements in high-quality, large-scale English resources have pushed the frontier of English Automatic Text Simplification (ATS) research. However, less work has been done on multilingual text simplification due to the lack of a diverse evaluation benchmark that covers complex-simple sentence pairs in many languages. This paper introduces the MultiSim benchmark, a collection of 27 resources in 12 distinct languages containing over 1.7 million complex-simple sentence pairs. This benchmark will encourage research in developing more effective multilingual text simplification models and evaluation metrics. Our experiments using MultiSim with pre-trained multilingual language models reveal exciting performance improvements from multilingual training in non-English settings. We observe strong performance from Russian in zero-shot cross-lingual transfer to low-resource languages. We further show that few-shot prompting with BLOOM-176b achieves comparable quality to reference simplifications outperforming fine-tuned models in most languages. We validate these findings through human evaluation.",4e1a4d6804c7983c659feb7e41d49ad8c21aaa43,Semantic Scholar,,, towards informative fewshot prompt with maximum information gain for incontext learning,"['Hongfu Liu', 'Ye Wang']",https://arxiv.org/pdf/2310.08923,2023-10-13,,"Large Language models (LLMs) possess the capability to engage In-context Learning (ICL) by leveraging a few demonstrations pertaining to a new downstream task as conditions. However, this particular learning paradigm suffers from high instability stemming from substantial variances induced by factors such as the input distribution of selected examples, their ordering, and prompt formats. In this work, we demonstrate that even when all these factors are held constant, the random selection of examples still results in high variance. Consequently, we aim to explore the informative ability of data examples by quantifying the Information Gain (IG) obtained in prediction after observing a given example candidate. Then we propose to sample those with maximum IG. 
Additionally, we identify the presence of template bias, which can lead to unfair evaluations of IG during the sampling process. To mitigate this bias, we introduce Calibration Before Sampling strategy. The experimental results illustrate that our proposed method can yield an average relative improvement of 14.3% across six classification tasks using three LLMs.",53addc28b106440a3c306b2cff8e259ad63d6d53,Semantic Scholar,,, building cooperative embodied agents modularly with large language models,"['Hongxin Zhang', 'Weihua Du', 'Jiaming Shan', 'Qinhong Zhou', 'Yilun Du', 'J. Tenenbaum', 'Tianmin Shu', 'Chuang Gan']",https://arxiv.org/pdf/2307.02485,2023-07-05,,"Large Language Models (LLMs) have demonstrated impressive planning abilities in single-agent embodied tasks across various domains. However, their capacity for planning and communication in multi-agent cooperation remains unclear, even though these are crucial skills for intelligent embodied agents. In this paper, we present a novel framework that utilizes LLMs for multi-agent cooperation and tests it in various embodied environments. Our framework enables embodied agents to plan, communicate, and cooperate with other embodied agents or humans to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs, such as GPT-4, can surpass strong planning-based methods and exhibit emergent effective communication using our framework without requiring fine-tuning or few-shot prompting. We also discover that LLM-based agents that communicate in natural language can earn more trust and cooperate more effectively with humans. Our research underscores the potential of LLMs for embodied AI and lays the foundation for future research in multi-agent cooperation. Videos can be found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/.",587352c3b95c90de6d37f061c8e117f42be0b575,Semantic Scholar,,, consprompt easily exploiting contrastive samples for fewshot prompt learning,"['Jinta Weng', 'Yue Hu', 'Zhihong Tian', 'Heyan Huang']",https://arxiv.org/pdf/2211.04118,,,"Prompt learning recently become an effective linguistic tool to motivate the PLMs’ knowledge on few-shot-setting tasks. However, studies have shown the lack of robustness still exists in prompt learning, since suitable initialization of continuous prompt and expert-first manual prompt are essential in fine-tuning process. What is more, human also utilize their comparative ability to motivate their existing knowledge for distinguishing different examples. Motivated by this, we explore how to use contrastive samples to strengthen prompt learning. In detail, we first propose our model ConsPrompt combining with prompt encoding network, contrastive sampling module, and contrastive scoring module. Subsequently, two sampling strategies, similarity-based and label-based strategies, are introduced to realize differential contrastive learning. The effectiveness of proposed ConsPrompt is demonstrated in five different few-shot learning tasks and shown the similarity-based sampling strategy is more effective than label-based in combining contrastive learning. Our results also exhibits the state-of-the-art performance and robustness in different few-shot settings, which proves that the ConsPrompt could be assumed as a better knowledge probe to motivate PLMs. 
As far as we could reach, this is the first work exploring how to use contrastive learning approach and suitable contrastive samples to enhance prompt-based fine-tuning.",5e3675bdbe898cb28a0fc3c2f72a578a97fe64bb,Semantic Scholar,,, can gpt3 perform statutory reasoning,"['Andrew Blair-Stanek', 'Nils Holzenberger', 'Benjamin Van Durme']",https://arxiv.org/pdf/2302.06100,2023-02-13,,"Statutory reasoning is the task of reasoning with facts and statutes, which are rules written in natural language by a legislature. It is a basic legal skill. In this paper we explore the capabilities of the most capable GPT-3 model, text-davinci-003, on an established statutory-reasoning dataset called SARA. We consider a variety of approaches, including dynamic few-shot prompting, chain-of-thought prompting, and zero-shot prompting. While we achieve results with GPT-3 that are better than the previous best published results, we also identify several types of clear errors it makes. We investigate why these errors happen. We discover that GPT-3 has imperfect prior knowledge of the actual U.S. statutes on which SARA is based. More importantly, we create simple synthetic statutes, which GPT-3 is guaranteed not to have seen during training. We find GPT-3 performs poorly at answering straightforward questions about these simple synthetic statutes.",5f5253fb15ac382e96ade0335baf1cfaa240fb1d,Semantic Scholar,,, explainable verbal reasoner plus (evr+) a natural language reasoning framework that supports diverse compositional reasoning,"['Zhengzhong Liang', 'Zeyu Zhang', 'Steven Bethard', 'M. Surdeanu']",http://arxiv.org/pdf/2305.00061,2023-04-28,,"Languages models have been successfully applied to a variety of reasoning tasks in NLP, yet the language models still suffer from compositional generalization. In this paper we present Explainable Verbal Reasoner Plus (EVR+), a reasoning framework that enhances language models' compositional reasoning ability by (1) allowing the model to explicitly generate and execute symbolic operators, and (2) allowing the model to decompose a complex task into several simpler ones in a flexible manner. Compared with its predecessor Explainable Verbal Reasoner (EVR) and other previous approaches adopting similar ideas, our framework supports more diverse types of reasoning such as nested loops and different types of recursion. To evaluate our reasoning framework, we build a synthetic dataset with five tasks that require compositional reasoning. Results show that our reasoning framework can enhance the language model's compositional generalization performance on the five tasks, using a fine-tuned language model. We also discussed the possibility and the challenges to combine our reasoning framework with a few-shot prompted language model.",5f88b907cb6b79ce22e826832f05c0471ecb095e,Semantic Scholar,,, on bilingual lexicon induction with large language models,"['Yaoyiran Li', 'Anna Korhonen', ""Ivan Vuli'c""]",https://aclanthology.org/2023.emnlp-main.595.pdf,2023-10-21,,"Bilingual Lexicon Induction (BLI) is a core task in multilingual NLP that still, to a large extent, relies on calculating cross-lingual word representations. Inspired by the global paradigm shift in NLP towards Large Language Models (LLMs), we examine the potential of the latest generation of LLMs for the development of bilingual lexicons. 
We ask the following research question: Is it possible to prompt and fine-tune multilingual LLMs (mLLMs) for BLI, and how does this approach compare against and complement current BLI approaches? To this end, we systematically study 1) zero-shot prompting for unsupervised BLI and 2) few-shot in-context prompting with a set of seed translation pairs, both without any LLM fine-tuning, as well as 3) standard BLI-oriented fine-tuning of smaller LLMs. We experiment with 18 open-source text-to-text mLLMs of different sizes (from 0.3B to 13B parameters) on two standard BLI benchmarks covering a range of typologically diverse languages. Our work is the first to demonstrate strong BLI capabilities of text-to-text mLLMs. The results reveal that few-shot prompting with in-context examples from nearest neighbours achieves the best performance, establishing new state-of-the-art BLI scores for many language pairs. We also conduct a series of in-depth analyses and ablation studies, providing more insights on BLI with (m)LLMs, also along with their limitations.",6036f424468a5be5dd9b427ae266b72cb8468b5f,Semantic Scholar,,, language models as knowledge bases for visual word sense disambiguation,"['Anastasia Kritharoula', 'Maria Lymperaiou', 'G. Stamou']",https://arxiv.org/pdf/2310.01960,2023-10-03,,"Visual Word Sense Disambiguation (VWSD) is a novel challenging task that lies between linguistic sense disambiguation and fine-grained multimodal retrieval. The recent advancements in the development of visiolinguistic (VL) transformers suggest some off-the-self implementations with encouraging results, which however we argue that can be further improved. To this end, we propose some knowledge-enhancement techniques towards improving the retrieval performance of VL transformers via the usage of Large Language Models (LLMs) as Knowledge Bases. More specifically, knowledge stored in LLMs is retrieved with the help of appropriate prompts in a zero-shot manner, achieving performance advancements. Moreover, we convert VWSD to a purely textual question-answering (QA) problem by considering generated image captions as multiple-choice candidate answers. Zero-shot and few-shot prompting strategies are leveraged to explore the potential of such a transformation, while Chain-of-Thought (CoT) prompting in the zero-shot setting is able to reveal the internal reasoning steps an LLM follows to select the appropriate candidate. In total, our presented approach is the first one to analyze the merits of exploiting knowledge stored in LLMs in different ways to solve WVSD.",61bbdbf481a6d3519c22513ebe8d6c3cd381851e,Semantic Scholar,,, challenging bigbench tasks and whether chainofthought can solve them,"['Mirac Suzgun', 'Nathan Scales', 'Nathanael Scharli', 'Sebastian Gehrmann', 'Yi Tay', 'Hyung Won Chung', 'Aakanksha Chowdhery', 'Quoc V. Le', 'E. Chi', 'Denny Zhou', 'Jason Wei']",http://arxiv.org/pdf/2210.09261,2022-10-17,,"BIG-Bench (Srivastava et al., 2022) is a diverse evaluation suite that focuses on tasks believed to be beyond the capabilities of current language models. Language models have already made good progress on this benchmark, with the best model in the BIG-Bench paper outperforming average reported human-rater results on 65% of the BIG-Bench tasks via few-shot prompting. But on what tasks do language models fall short of average human-rater performance, and are those tasks actually unsolvable by current language models? 
In this work, we focus on a suite of 23 challenging BIG-Bench tasks which we call BIG-Bench Hard (BBH). These are the task for which prior language model evaluations did not outperform the average human-rater. We find that applying chain-of-thought (CoT) prompting to BBH tasks enables PaLM to surpass the average human-rater performance on 10 of the 23 tasks, and Codex (code-davinci-002) to surpass the average human-rater performance on 17 of the 23 tasks. Since many tasks in BBH require multi-step reasoning, few-shot prompting without CoT, as done in the BIG-Bench evaluations (Srivastava et al., 2022), substantially underestimates the best performance and capabilities of language models, which is better captured via CoT prompting. As further analysis, we explore the interaction between CoT and model scale on BBH, finding that CoT enables emergent task performance on several BBH tasks with otherwise flat scaling curves.",663a41c866d49ce052801fbc88947d39764cad29,Semantic Scholar,,, fireact toward language agent finetuning,"['Baian Chen', 'Chang Shu', 'Ehsan Shareghi', 'Nigel Collier', 'Karthik Narasimhan', 'Shunyu Yao']",https://arxiv.org/pdf/2310.05915,2023-10-09,,"Recent efforts have augmented language models (LMs) with external tools or environments, leading to the development of language agents that can reason and act. However, most of these agents rely on few-shot prompting techniques with off-the-shelf LMs. In this paper, we investigate and argue for the overlooked direction of fine-tuning LMs to obtain language agents. Using a setup of question answering (QA) with a Google search API, we explore a variety of base LMs, prompting methods, fine-tuning data, and QA tasks, and find language agents are consistently improved after fine-tuning their backbone LMs. For example, fine-tuning Llama2-7B with 500 agent trajectories generated by GPT-4 leads to a 77% HotpotQA performance increase. Furthermore, we propose FireAct, a novel approach to fine-tuning LMs with trajectories from multiple tasks and prompting methods, and show having more diverse fine-tuning data can further improve agents. Along with other findings regarding scaling effects, robustness, generalization, efficiency and cost, our work establishes comprehensive benefits of fine-tuning LMs for agents, and provides an initial set of experimental designs, insights, as well as open questions toward language agent fine-tuning.",67daf8c4fe1958d20ebdf95c2a36dd490c73836f,Semantic Scholar,,, natural language decomposition and interpretation of complex utterances,"['Harsh Jhamtani', 'Hao Fang', 'Patrick Xia', 'Eran Levy', 'Jacob Andreas', 'Benjamin Van Durme']",http://arxiv.org/pdf/2305.08677,2023-05-15,,"Designing natural language interfaces has historically required collecting supervised data to translate user requests into carefully designed intent representations. This requires enumerating and labeling a long tail of user requests, which is challenging. At the same time, large language models (LLMs) encode knowledge about goals and plans that can help conversational assistants interpret user requests requiring numerous steps to complete. We introduce an approach to handle complex-intent-bearing utterances from a user via a process of hierarchical natural language decomposition and interpretation. Our approach uses a pre-trained language model to decompose a complex utterance into a sequence of simpler natural language steps and interprets each step using the language-to-program model designed for the interface. 
To test our approach, we collect and release DeCU -- a new NL-to-program benchmark to evaluate Decomposition of Complex Utterances. Experiments show that the proposed approach enables the interpretation of complex utterances with almost no complex training data, while outperforming standard few-shot prompting approaches.",68040213e9a83408cdc491ed3e235b52b537eed1,Semantic Scholar,,, meal stable and active learning for fewshot prompting,"['Abdullatif Köksal', 'Timo Schick', 'Hinrich Schutze']",http://arxiv.org/pdf/2211.08358,2022-11-15,,"Few-shot classification has made great strides due to foundation models that, through priming and prompting, are highly effective few-shot learners. However, this approach has high variance both across different sets of few shots (data selection) and across different finetuning runs (run variability). This is problematic not only because it impedes the fair comparison of different approaches, but especially because it makes few-shot learning too unreliable for many real-world applications. To alleviate these issues, we make two contributions for more stable and effective few-shot learning: First, we propose novel ensembling methods and show that they substantially reduce run variability. Second, we introduce a new active learning (AL) criterion for data selection and present the first AL-based approach specifically tailored towards prompt-based learning. In our experiments, we show that our combined method, MEAL (Multiprompt finetuning and prediction Ensembling with Active Learning), improves overall performance of prompt-based finetuning by 2.3 points on five diverse tasks. We publicly share our code and data splits in https://github.com/akoksal/MEAL.",6a465062e88853c584148d5a9f6e319050aac0ec,Semantic Scholar,,, pal programaided language models,"['Luyu Gao', 'Aman Madaan', 'Shuyan Zhou', 'Uri Alon', 'Pengfei Liu', 'Yiming Yang', 'Jamie Callan', 'Graham Neubig']",http://arxiv.org/pdf/2211.10435,2022-11-18,,"Large language models (LLMs) have recently demonstrated an impressive ability to perform arithmetic and symbolic reasoning tasks, when provided with a few examples at test time (""few-shot prompting""). Much of this success can be attributed to prompting methods such as""chain-of-thought'', which employ LLMs for both understanding the problem description by decomposing it into steps, as well as solving each step of the problem. While LLMs seem to be adept at this sort of step-by-step decomposition, LLMs often make logical and arithmetic mistakes in the solution part, even when the problem is decomposed correctly. In this paper, we present Program-Aided Language models (PAL): a novel approach that uses the LLM to read natural language problems and generate programs as the intermediate reasoning steps, but offloads the solution step to a runtime such as a Python interpreter. With PAL, decomposing the natural language problem into runnable steps remains the only learning task for the LLM, while solving is delegated to the interpreter. We demonstrate this synergy between a neural LLM and a symbolic interpreter across 13 mathematical, symbolic, and algorithmic reasoning tasks from BIG-Bench Hard and other benchmarks. In all these natural language reasoning tasks, generating code using an LLM and reasoning using a Python interpreter leads to more accurate results than much larger models. 
For example, PAL using Codex achieves state-of-the-art few-shot accuracy on the GSM8K benchmark of math word problems, surpassing PaLM-540B which uses chain-of-thought by absolute 15% top-1. Our code and data are publicly available at http://reasonwithpal.com/ .",6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7,Semantic Scholar,,, prompted llms as chatbot modules for long opendomain conversation,"['Gibbeum Lee', 'Volker Hartmann', 'Jongho Park', 'Dimitris Papailiopoulos', 'Kangwook Lee']",https://aclanthology.org/2023.findings-acl.277.pdf,2023-05-08,,"In this paper, we propose MPC (Modular Prompted Chatbot), a new approach for creating high-quality conversational agents without the need for fine-tuning. Our method utilizes pre-trained large language models (LLMs) as individual modules for long-term consistency and flexibility, by using techniques such as few-shot prompting, chain-of-thought (CoT), and external memory. Our human evaluation results show that MPC is on par with fine-tuned chatbot models in open-domain conversations, making it an effective solution for creating consistent and engaging chatbots.",700da3f3758e053c379f905bebee261ba69f1073,Semantic Scholar,,, prompting gpt3 to be reliable,"['Chenglei Si', 'Zhe Gan', 'Zhengyuan Yang', 'Shuohang Wang', 'Jianfeng Wang', 'Jordan L. Boyd-Graber', 'Lijuan Wang']",http://arxiv.org/pdf/2210.09150,2022-10-17,,"Large language models (LLMs) show impressive abilities via few-shot prompting. Commercialized APIs such as OpenAI GPT-3 further increase their use in real-world language applications. However, the crucial problem of how to improve the reliability of GPT-3 is still under-explored. While reliability is a broad and vaguely defined term, we decompose reliability into four main facets that correspond to the existing framework of ML safety and are well-recognized to be important: generalizability, social biases, calibration, and factuality. Our core contribution is to establish simple and effective prompts that improve GPT-3's reliability as it: 1) generalizes out-of-distribution, 2) balances demographic distribution and uses natural language instructions to reduce social biases, 3) calibrates output probabilities, and 4) updates the LLM's factual knowledge and reasoning chains. With appropriate prompts, GPT-3 is more reliable than smaller-scale supervised models on all these facets. We release all processed datasets, evaluation scripts, and model predictions. Our systematic empirical study not only sheds new insights on the reliability of prompting LLMs, but more importantly, our prompting strategies can help practitioners more reliably use LLMs like GPT-3.",711d5e8ddbb840ad31a9ffa3d38590603ba69a92,Semantic Scholar,,, understanding how model size affects fewshot instruction prompting,"['Ayrton San Joaquin', 'Ardy Haroen']",https://arxiv.org/pdf/2212.01907,2022-12-04,,"Large Language Models are affected by the phenomena of memorizing and forgetting their training data. But how do these vary by model size? We work towards this question by investigating how the model size affects the model's ability to discriminate a word's meaning in a given context. We introduce a dataset called DeltaWords, which evaluates a model's ability to follow instructions to select a sentence which replaces the target word with its antonym. We show a weak inverse scaling trend, where task accuracy degrades as model size increase, under extremely few-shot prompting regimes. 
We show that increasing the number of examples tend to disproportionately benefit larger models than smaller models.",72491b96d8a614d1a9a099707d44593d4b5a8f49,Semantic Scholar,,, smartllm smart multiagent robot task planning using large language models,"['S. S. Kannan', 'Vishnunandan L. N. Venkatesh', 'Byung-Cheol Min']",https://arxiv.org/pdf/2309.10062,2023-09-18,,"In this work, we introduce SMART-LLM, an innovative framework designed for embodied multi-robot task planning. SMART-LLM: Smart Multi-Agent Robot Task Planning using Large Language Models (LLMs), harnesses the power of LLMs to convert high-level task instructions provided as input into a multi-robot task plan. It accomplishes this by executing a series of stages, including task decomposition, coalition formation, and task allocation, all guided by programmatic LLM prompts within the few-shot prompting paradigm. We create a benchmark dataset designed for validating the multi-robot task planning problem, encompassing four distinct categories of high-level instructions that vary in task complexity. Our evaluation experiments span both simulation and real-world scenarios, demonstrating that the proposed model can achieve promising results for generating multi-robot task plans. The experimental videos, code, and datasets from the work can be found at https://sites.google.com/view/smart-llm/.",755853c6b30f5a186131e23a63c68a3f2737068e,Semantic Scholar,,, selfexplanation prompting improves dialogue understanding in large language models,"['Haoyu Gao', 'Ting-En Lin', 'Hangyu Li', 'Min Yang', 'Yuchuan Wu', 'Wentao Ma', 'Yongbin Li']",https://arxiv.org/pdf/2309.12940,2023-09-22,,"Task-oriented dialogue (TOD) systems facilitate users in executing various activities via multi-turn dialogues, but Large Language Models (LLMs) often struggle to comprehend these intricate contexts. In this study, we propose a novel""Self-Explanation""prompting strategy to enhance the comprehension abilities of LLMs in multi-turn dialogues. This task-agnostic approach requires the model to analyze each dialogue utterance before task execution, thereby improving performance across various dialogue-centric tasks. Experimental results from six benchmark datasets confirm that our method consistently outperforms other zero-shot prompts and matches or exceeds the efficacy of few-shot prompts, demonstrating its potential as a powerful tool in enhancing LLMs' comprehension in complex dialogue tasks.",75ce9634d281cc12cbe434f86c737df8e10796fa,Semantic Scholar,,, chatgpthealthprompt harnessing the power of xai in promptbased healthcare decision support using chatgpt,"['Fatemeh Nazary', 'Yashar Deldjoo', 'T. D. Noia']",https://arxiv.org/pdf/2308.09731,2023-08-17,,"This study presents an innovative approach to the application of large language models (LLMs) in clinical decision-making, focusing on OpenAI's ChatGPT. Our approach introduces the use of contextual prompts-strategically designed to include task description, feature description, and crucially, integration of domain knowledge-for high-quality binary classification tasks even in data-scarce scenarios. The novelty of our work lies in the utilization of domain knowledge, obtained from high-performing interpretable ML models, and its seamless incorporation into prompt design. By viewing these ML models as medical experts, we extract key insights on feature importance to aid in decision-making processes. This interplay of domain knowledge and AI holds significant promise in creating a more insightful diagnostic tool. 
Additionally, our research explores the dynamics of zero-shot and few-shot prompt learning based on LLMs. By comparing the performance of OpenAI's ChatGPT with traditional supervised ML models in different data conditions, we aim to provide insights into the effectiveness of prompt engineering strategies under varied data availability. In essence, this paper bridges the gap between AI and healthcare, proposing a novel methodology for LLMs application in clinical decision support systems. It highlights the transformative potential of effective prompt design, domain knowledge integration, and flexible learning approaches in enhancing automated decision-making.",793eb805800c4af0b06260079e178efa0377b9d7,Semantic Scholar,,, transferring procedural knowledge across commonsense tasks,"['Yifan Jiang', 'Filip Ilievski', 'Kaixin Ma']",https://arxiv.org/pdf/2304.13867,2023-04-26,,"Stories about everyday situations are an essential part of human communication, motivating the need to develop AI agents that can reliably understand these stories. Despite the long list of supervised methods for story completion and procedural understanding, current AI has no mechanisms to automatically track and explain procedures in unseen stories. To bridge this gap, we study the ability of AI models to transfer procedural knowledge to novel narrative tasks in a transparent manner. We design LEAP: a comprehensive framework that integrates state-of-the-art modeling architectures, training regimes, and augmentation strategies based on both natural and synthetic stories. To address the lack of densely annotated training data, we devise a robust automatic labeler based on few-shot prompting to enhance the augmented data. Our experiments with in- and out-of-domain tasks reveal insights into the interplay of different architectures, training regimes, and augmentation strategies. LEAP's labeler has a clear positive impact on out-of-domain datasets, while the resulting dense annotation provides native explainability.",7beec352ac2597c3cd3dc7aceb2f8cd068b72d15,Semantic Scholar,,, exploring the landscape of distributional robustness for question answering models,"['Anas Awadalla', 'Mitchell Wortsman', 'Gabriel Ilharco', 'Sewon Min', 'Ian H. Magnusson', 'Hannaneh Hajishirzi', 'Ludwig Schmidt']",http://arxiv.org/pdf/2210.12517,2022-10-22,,"We conduct a large empirical evaluation to investigate the landscape of distributional robustness in question answering. Our investigation spans over 350 models and 16 question answering datasets, including a diverse set of architectures, model sizes, and adaptation methods (e.g., fine-tuning, adapter tuning, in-context learning, etc.). We find that, in many cases, model variations do not affect robustness and in-distribution performance alone determines out-of-distribution performance. Moreover, our findings indicate that i) zero-shot and in-context learning methods are more robust to distribution shifts than fully fine-tuned models; ii) few-shot prompt fine-tuned models exhibit better robustness than few-shot fine-tuned span prediction models; iii) parameter-efficient and robustness enhancing training methods provide no significant robustness improvements. 
In addition, we publicly release all evaluations to encourage researchers to further analyze robustness trends for question answering models.",7cf4f8cb8b4a373d869e785b79160dda7a49a250,Semantic Scholar,,, language models don't always say what they think unfaithful explanations in chainofthought prompting,"['Miles Turpin', 'Julian Michael', 'Ethan Perez', 'Sam Bowman']",http://arxiv.org/pdf/2305.04388,2023-05-07,,"Large Language Models (LLMs) can achieve strong performance on many tasks by producing step-by-step reasoning before giving a final output, often referred to as chain-of-thought reasoning (CoT). It is tempting to interpret these CoT explanations as the LLM's process for solving a task. This level of transparency into LLMs' predictions would yield significant safety benefits. However, we find that CoT explanations can systematically misrepresent the true reason for a model's prediction. We demonstrate that CoT explanations can be heavily influenced by adding biasing features to model inputs--e.g., by reordering the multiple-choice options in a few-shot prompt to make the answer always""(A)""--which models systematically fail to mention in their explanations. When we bias models toward incorrect answers, they frequently generate CoT explanations rationalizing those answers. This causes accuracy to drop by as much as 36% on a suite of 13 tasks from BIG-Bench Hard, when testing with GPT-3.5 from OpenAI and Claude 1.0 from Anthropic. On a social-bias task, model explanations justify giving answers in line with stereotypes without mentioning the influence of these social biases. Our findings indicate that CoT explanations can be plausible yet misleading, which risks increasing our trust in LLMs without guaranteeing their safety. Building more transparent and explainable systems will require either improving CoT faithfulness through targeted efforts or abandoning CoT in favor of alternative methods.",7dc928f41e15f65f1267bd87b0fcfcc7e715cb56,Semantic Scholar,,, zara improving fewshot selfrationalization for small language models,"['Wei-Lin Chen', 'An-Zi Yen', 'Hen-Hsen Huang', 'Cheng-Kuang Wu', 'Hsin-Hsi Chen']",http://arxiv.org/pdf/2305.07355,2023-05-12,,"Language models (LMs) that jointly generate end-task answers as well as free-text rationales are known as self-rationalization models. Recent works demonstrate great performance gain for self-rationalization by few-shot prompting LMs with rationale-augmented exemplars. However, the ability to benefit from explanations only emerges with large-scale LMs, which have poor accessibility. In this work, we explore the less-studied setting of leveraging explanations for small LMs to improve few-shot self-rationalization. We first revisit the relationship between rationales and answers. Inspired by the implicit mental process of how human beings assess explanations, we present a novel approach, Zero-shot Augmentation of Rationale-Answer pairs (ZARA), to automatically construct pseudo-parallel data for self-training by reducing the problem of plausibility judgement to natural language inference. Experimental results show ZARA achieves SOTA performance on the FEB benchmark, for both the task accuracy and the explanation metric. 
In addition, we conduct human and quantitative evaluation validating ZARA's ability to automatically identify plausible and accurate rationale-answer pairs.",7df3595bdb4003589e8ca1757cc39ec03a39a2ff,Semantic Scholar,,, natural language to code generation in interactive data science notebooks,"['Pengcheng Yin', 'Wen-Ding Li', 'Kefan Xiao', 'A. Rao', 'Yeming Wen', 'Kensen Shi', 'Joshua Howland', 'Paige Bailey', 'Michele Catasta', 'H. Michalewski', 'Oleksandr Polozov', 'Charles Sutton']",http://arxiv.org/pdf/2212.09248,2022-12-19,,"Computational notebooks, such as Jupyter notebooks, are interactive computing environments that are ubiquitous among data scientists to perform data wrangling and analytic tasks. To measure the performance of AI pair programmers that automatically synthesize programs for those tasks given natural language (NL) intents from users, we build ARCADE, a benchmark of 1078 code generation problems using the pandas data analysis framework in data science notebooks. ARCADE features multiple rounds of NL-to-code problems from the same notebook. It requires a model to understand rich multi-modal contexts, such as existing notebook cells and their execution states as well as previous turns of interaction. To establish a strong baseline on this challenging task, we develop PaChiNCo, a 62B code language model (LM) for Python computational notebooks, which significantly outperforms public code LMs. Finally, we explore few-shot prompting strategies to elicit better code with step-by-step decomposition and NL explanation, showing the potential to improve the diversity and explainability of model predictions. Arcade is publicly available at https://github.com/google-research/arcade-nl2code/.",815c6ca281536d18ec0eb408b6e46e72a0826163,Semantic Scholar,,, multiparty chat conversational agents in group settings with humans and models,"['Jimmy Wei', 'Kurt Shuster', 'Arthur Szlam', 'J. Weston', 'Jack Urbanek', 'M. Komeili']",http://arxiv.org/pdf/2304.13835,2023-04-26,,"Current dialogue research primarily studies pairwise (two-party) conversations, and does not address the everyday setting where more than two speakers converse together. In this work, we both collect and evaluate multi-party conversations to study this more general case. We use the LIGHT environment to construct grounded conversations, where each participant has an assigned character to role-play. We thus evaluate the ability of language models to act as one or more characters in such conversations. Models require two skills that pairwise-trained models appear to lack: (1) being able to decide when to talk; (2) producing coherent utterances grounded on multiple characters. We compare models trained on our new dataset to existing pairwise-trained dialogue models, as well as large language models with few-shot prompting. We find that our new dataset, MultiLIGHT, which we will publicly release, can help bring significant improvements in the group setting.",82beb8a86d438e85a134182128d47607b1b04004,Semantic Scholar,,, can large language models be good path planners a benchmark and investigation on spatialtemporal reasoning,"['Mohamed Aghzal', 'E. Plaku', 'Ziyu Yao']",https://arxiv.org/pdf/2310.03249,2023-10-05,,"Large language models (LLMs) have achieved remarkable success across a wide spectrum of tasks; however, they still face limitations in scenarios that demand long-term planning and spatial reasoning. 
To facilitate this line of research, in this work, we propose a new benchmark, termed $\textbf{P}$ath $\textbf{P}$lanning from $\textbf{N}$atural $\textbf{L}$anguage ($\textbf{PPNL}$). Our benchmark evaluates LLMs' spatial-temporal reasoning by formulating ''path planning'' tasks that require an LLM to navigate to target locations while avoiding obstacles and adhering to constraints. Leveraging this benchmark, we systematically investigate LLMs including GPT-4 via different few-shot prompting methodologies as well as BART and T5 of various sizes via fine-tuning. Our experimental results show the promise of few-shot GPT-4 in spatial reasoning, when it is prompted to reason and act interleavedly, although it still fails to perform long-term temporal reasoning. In contrast, while fine-tuned LLMs achieved impressive results on in-distribution reasoning tasks, they struggled to generalize to larger environments or environments with more obstacles.",831b87798ceeee4e5f600a45bce717111ecefa06,Semantic Scholar,,, towards legally enforceable hate speech detection for public forums,"['Chunyan Luo', 'R. Bhambhoria', 'Xiao-Dan Zhu', 'Samuel Dahan']",http://arxiv.org/pdf/2305.13677,2023-05-23,,"Hate speech causes widespread and deep-seated societal issues. Proper enforcement of hate speech laws is key for protecting groups of people against harmful and discriminatory language. However, determining what constitutes hate speech is a complex task that is highly open to subjective interpretations. Existing works do not align their systems with enforceable definitions of hate speech, which can make their outputs inconsistent with the goals of regulators. This research introduces a new perspective and task for enforceable hate speech detection centred around legal definitions, and a dataset annotated on violations of eleven possible definitions by legal experts. Given the challenge of identifying clear, legally enforceable instances of hate speech, we augment the dataset with expert-generated samples and an automatically mined challenge set. We experiment with grounding the model decision in these definitions using zero-shot and few-shot prompting. We then report results on several large language models (LLMs). With this task definition, automatic hate speech detection can be more closely aligned to enforceable laws, and hence assist in more rigorous enforcement of legal protections against harmful speech in public forums.",895f3c9e452ae51fb02786de424ce6d2bba11c3b,Semantic Scholar,,, usb a unified summarization benchmark across tasks and domains,"['Kundan Krishna', 'Prakhar Gupta', 'S. Ramprasad', 'Byron C. Wallace', 'Jeffrey P. Bigham', 'Zachary Chase Lipton']",http://arxiv.org/pdf/2305.14296,2023-05-23,,"While the NLP community has produced numerous summarization benchmarks, none provide the rich annotations required to simultaneously address many important problems related to control and reliability. We introduce a Wikipedia-derived benchmark, complemented by a rich set of crowd-sourced annotations, that supports $8$ interrelated tasks: (i) extractive summarization; (ii) abstractive summarization; (iii) topic-based summarization; (iv) compressing selected sentences into a one-line summary; (v) surfacing evidence for a summary sentence; (vi) predicting the factual accuracy of a summary sentence; (vii) identifying unsubstantiated spans in a summary sentence; (viii) correcting factual errors in summaries. 
We compare various methods on this benchmark and discover that on multiple tasks, moderately-sized fine-tuned models consistently outperform much larger few-shot prompted language models. For factuality-related tasks, we also evaluate existing heuristics to create training data and find that training on them results in worse performance than training on $20\times$ less human-labeled data. Our articles draw from $6$ domains, facilitating cross-domain analysis. On some tasks, the amount of training data matters more than the domain where it comes from, while for other tasks training specifically on data from the target domain, even if limited, is more beneficial.",8ab27849799286459465d2262f926354093b20a9,Semantic Scholar,,, grounding language with visual affordances over unstructured data,"['Oier Mees', 'Jessica Borja-Diaz', 'Wolfram Burgard']",https://arxiv.org/pdf/2210.01911,2022-10-04,,"Recent works have shown that Large Language Models (LLMs) can be applied to ground natural language to a wide variety of robot skills. However, in practice, learning multi-task, language-conditioned robotic skills typically requires large-scale data collection and frequent human intervention to reset the environment or help correcting the current policies. In this work, we propose a novel approach to efficiently learn general-purpose language-conditioned robot skills from unstructured, offline and reset-free data in the real world by exploiting a self-supervised visuo-lingual affordance model, which requires annotating as little as 1% of the total data with language. We evaluate our method in extensive experiments both in simulated and real-world robotic tasks, achieving state-of-the-art performance on the challenging CALVIN benchmark and learning over 25 distinct visuomotor manipulation tasks with a single policy in the real world. We find that when paired with LLMs to break down abstract natural language instructions into subgoals via few-shot prompting, our method is capable of completing long-horizon, multi-tier tasks in the real world, while requiring an order of magnitude less data than previous approaches. Code and videos are available at http://hulc2.cs.uni-freiburg.de.",8f84dcbad8cd3b5b4d9229c56bc95f24be859a35,Semantic Scholar,,, evaluating large language models on graphs performance insights and comparative analysis,"['Chang Liu', 'Bo Wu']",https://arxiv.org/pdf/2308.11224,2023-08-22,,"Large Language Models (LLMs) have garnered considerable interest within both academic and industrial. Yet, the application of LLMs to graph data remains under-explored. In this study, we evaluate the capabilities of four LLMs in addressing several analytical problems with graph data. We employ four distinct evaluation metrics: Comprehension, Correctness, Fidelity, and Rectification. Our results show that: 1) LLMs effectively comprehend graph data in natural language and reason with graph topology. 2) GPT models can generate logical and coherent results, outperforming alternatives in correctness. 3) All examined LLMs face challenges in structural reasoning, with techniques like zero-shot chain-of-thought and few-shot prompting showing diminished efficacy. 4) GPT models often produce erroneous answers in multi-answer tasks, raising concerns in fidelity. 5) GPT models exhibit elevated confidence in their outputs, potentially hindering their rectification capacities. Notably, GPT-4 has demonstrated the capacity to rectify responses from GPT-3.5-turbo and its own previous iterations. 
The code is available at: https://github.com/Ayame1006/LLMtoGraph.",927fc7652e033c9eb17296df087e3e6491112bb0,Semantic Scholar,,, revisiting relation extraction in the era of large language models,"['Somin Wadhwa', 'Silvio Amir', 'Byron C. Wallace']",http://arxiv.org/pdf/2305.05003,2023-05-08,,"Relation extraction (RE) is the core NLP task of inferring semantic relationships between entities from text. Standard supervised RE techniques entail training modules to tag tokens comprising entity spans and then predict the relationship between them. Recent work has instead treated the problem as a sequence-to-sequence task, linearizing relations between entities as target strings to be generated conditioned on the input. Here we push the limits of this approach, using larger language models (GPT-3 and Flan-T5 large) than considered in prior work and evaluating their performance on standard RE tasks under varying levels of supervision. We address issues inherent to evaluating generative approaches to RE by doing human evaluations, in lieu of relying on exact matching. Under this refined evaluation, we find that: (1) Few-shot prompting with GPT-3 achieves near SOTA performance, i.e., roughly equivalent to existing fully supervised models; (2) Flan-T5 is not as capable in the few-shot setting, but supervising and fine-tuning it with Chain-of-Thought (CoT) style explanations (generated via GPT-3) yields SOTA results. We release this model as a new baseline for RE tasks.",97782a67971c4ff1a74bf07e82fe20b2c4bf86c4,Semantic Scholar,,, selfpolish enhance reasoning in large language models via problem refinement,"['Zhiheng Xi', 'Senjie Jin', 'Yuhao Zhou', 'Rui Zheng', 'Songyang Gao', 'Tao Gui', 'Qi Zhang', 'Xuanjing Huang']",http://arxiv.org/pdf/2305.14497,2023-05-23,,"Prompting methods such as Chain-of-Thought (CoT) have shed new light on enhancing the reasoning capabilities of large language models, and researchers have extensively explored the generation process of rationales and answers. However, they have overlooked the potential challenges posed by the poor quality of reasoning problems, which may influence the reasoning performance significantly. In this work, we propose Self-Polish (SP), a novel method that facilitates the model's problem-solving process by prompting them to progressively refine the given problems to be more comprehensible and solvable. Specifically, the method teaches models to eliminate irrelevant information, rearrange the logic structure and organize local conditions into new ones parallelly. SP is orthogonal to all other prompting methods, making it convenient to integrate with state-of-the-art techniques for further improvement. We conduct thorough experiments on five benchmarks to illustrate the effectiveness of the proposed method. For example, with Text-davinci-003, our method boosts the performance of standard few-shot prompting by $8.0\%$ on GSM8K and $17.8\%$ on MultiArith; it also improves the performance of CoT by $6.0\%$ on GSM8K and $6.0\%$ on MathQA, respectively. Furthermore, our method also showcases impressive performance on robustness evaluation.",9a9b1e2968302eb882870537d4af6e2c722dfd1a,Semantic Scholar,,, spotlight mobile ui understanding using visionlanguage models with a focus,"['Gang Li', 'Yang Li']",http://arxiv.org/pdf/2209.14927,2022-09-29,,"Mobile UI understanding is important for enabling various interaction tasks such as UI automation and accessibility. 
Previous mobile UI modeling often depends on the view hierarchy information of a screen, which directly provides the structural data of the UI, with the hope to bypass challenging tasks of visual modeling from screen pixels. However, view hierarchies are not always available, and are often corrupted with missing object descriptions or misaligned structure information. As a result, although the use of view hierarchies could offer short-term gains, it may ultimately hinder the applicability and performance of the model. In this paper, we propose Spotlight, a vision-only approach for mobile UI understanding. Specifically, we enhance a vision-language model that only takes the screenshot of the UI and a region of interest on the screen -- the focus -- as the input. This general architecture of Spotlight is easily scalable and capable of performing a range of UI modeling tasks. Our experiments show that our model establishes SoTA results on several representative UI tasks and outperforms previous methods that use both screenshots and view hierarchies as inputs. Furthermore, we explore multi-task learning and few-shot prompting capacities of the proposed models, demonstrating promising results in the multi-task learning direction.",9b9fb973e5d3b413baa90648d9eb0743bd889747,Semantic Scholar,,, large language model prompt chaining for long legal document classification,['Dietrich Trautmann'],https://arxiv.org/pdf/2308.04138,2023-08-08,,"Prompting is used to guide or steer a language model in generating an appropriate response that is consistent with the desired outcome. Chaining is a strategy used to decompose complex tasks into smaller, manageable components. In this study, we utilize prompt chaining for extensive legal document classification tasks, which present difficulties due to their intricate domain-specific language and considerable length. Our approach begins with the creation of a concise summary of the original document, followed by a semantic search for related exemplar texts and their corresponding annotations from a training corpus. Finally, we prompt for a label - based on the task - to assign, by leveraging the in-context learning from the few-shot prompt. We demonstrate that through prompt chaining, we can not only enhance the performance over zero-shot, but also surpass the micro-F1 score achieved by larger models, such as ChatGPT zero-shot, using smaller models.",9bf587d032e3764720cccd5beaf941f5c32880bc,Semantic Scholar,,, lafter labelfree tuning of zeroshot classifier using language and unlabeled image collections,"['M. J. Mirza', 'Leonid Karlinsky', 'Wei Lin', 'M. Koziński', 'Horst Possegger', 'R. Feris', 'H. Bischof']",http://arxiv.org/pdf/2305.18287,2023-05-29,,"Recently, large-scale pre-trained Vision and Language (VL) models have set a new state-of-the-art (SOTA) in zero-shot visual classification enabling open-vocabulary recognition of potentially unlimited set of categories defined as simple language prompts. However, despite these great advances, the performance of these zeroshot classifiers still falls short of the results of dedicated (closed category set) classifiers trained with supervised fine tuning. In this paper we show, for the first time, how to reduce this gap without any labels and without any paired VL data, using an unlabeled image collection and a set of texts auto-generated using a Large Language Model (LLM) describing the categories of interest and effectively substituting labeled visual instances of those categories. 
Using our label-free approach, we are able to attain significant performance improvements over the zero-shot performance of the base VL model and other contemporary methods and baselines on a wide variety of datasets, demonstrating absolute improvement of up to 11.7% (3.8% on average) in the label-free setting. Moreover, despite our approach being label-free, we observe 1.3% average gains over leading few-shot prompting baselines that do use 5-shot supervision.",a04883d1d780b438de6c127caf7ebe3d9233e193,Semantic Scholar,,, small language models improve giants by rewriting their outputs,"['Giorgos Vernikos', 'Arthur Bravzinskas', 'Jakub Adamek', 'Jonathan Mallinson', 'Aliaksei Severyn', 'Eric Malmi']",http://arxiv.org/pdf/2305.13514,2023-05-22,,"Despite the impressive performance of large language models (LLMs), they often lag behind specialized models in various tasks. LLMs only use a fraction of the existing training data for in-context learning, while task-specific models harness the full dataset for fine-tuning. In this work, we tackle the problem of leveraging training data to improve the performance of LLMs without fine-tuning. Our approach directly targets LLM predictions without requiring access to their weights. We create a pool of candidates from the LLM through few-shot prompting and we employ a compact model, the LM-corrector (LMCor), specifically trained to merge these candidates to produce an enhanced output. Our experiments on four natural language generation tasks demonstrate that even a small LMCor model (250M) substantially improves the few-shot performance of LLMs (62B), matching and even outperforming standard fine-tuning. Furthermore, we illustrate the robustness of LMCor against different prompts, thereby minimizing the need for extensive prompt engineering. Finally, we show that LMCor can be seamlessly integrated with different LLMs at inference, serving as a plug-and-play module to improve their performance.",a21de70160c91dcf9b1e7a93fbb32f4b2687860a,Semantic Scholar,,, street a multitask structured reasoning and explanation benchmark,"['D. Ribeiro', 'Shen Wang', 'Xiaofei Ma', 'He Zhu', 'Rui Dong', 'Deguang Kong', 'Juliette Burger', 'Anjelica Ramos', 'William Yang Wang', 'Zhiheng Huang', 'G. Karypis', 'Bing Xiang', 'D. Roth']",http://arxiv.org/pdf/2302.06729,2023-02-13,,"We introduce STREET, a unified multi-task and multi-domain natural language reasoning and explanation benchmark. Unlike most existing question-answering (QA) datasets, we expect models to not only answer questions, but also produce step-by-step structured explanations describing how premises in the question are used to produce intermediate conclusions that can prove the correctness of a certain answer. We perform extensive evaluation with popular language models such as few-shot prompting GPT-3 and fine-tuned T5. We find that these models still lag behind human performance when producing such structured reasoning steps. We believe this work will provide a way for the community to better train and test systems on multi-step reasoning and explanations in natural language.",a3a241e9397fe29b37f96cb5e8f4b8bebed3d3da,Semantic Scholar,,, large language models as tax attorneys a case study in legal capabilities emergence,"['John J. Nay', 'David Karamardian', 'Sarah Lawsky', 'Wenting Tao', 'Meghana Moorthy Bhat', 'Raghav Jain', 'Aaron Travis Lee', 'Jonathan H. 
Choi', 'Jungo Kasai']",http://arxiv.org/pdf/2306.07075,2023-06-12,,"Better understanding of Large Language Models' (LLMs) legal analysis abilities can contribute to improving the efficiency of legal services, governing artificial intelligence, and leveraging LLMs to identify inconsistencies in law. This paper explores LLM capabilities in applying tax law. We choose this area of law because it has a structure that allows us to set up automated validation pipelines across thousands of examples, requires logical reasoning and maths skills, and enables us to test LLM capabilities in a manner relevant to real-world economic lives of citizens and companies. Our experiments demonstrate emerging legal understanding capabilities, with improved performance in each subsequent OpenAI model release. We experiment with retrieving and utilising the relevant legal authority to assess the impact of providing additional legal context to LLMs. Few-shot prompting, presenting examples of question-answer pairs, is also found to significantly enhance the performance of the most advanced model, GPT-4. The findings indicate that LLMs, particularly when combined with prompting enhancements and the correct legal texts, can perform at high levels of accuracy but not yet at expert tax lawyer levels. As LLMs continue to advance, their ability to reason about law autonomously could have significant implications for the legal profession and AI governance.",a6a0963fcf21ed47a2616ca3980f8f4f21e6d5ad,Semantic Scholar,,, distilling stepbystep! outperforming larger language models with less training data and smaller model sizes,"['Cheng-Yu Hsieh', 'Chun-Liang Li', 'Chih-Kuan Yeh', 'Hootan Nakhost', 'Yasuhisa Fujii', 'Alexander J. Ratner', 'Ranjay Krishna', 'Chen-Yu Lee', 'Tomas Pfister']",https://arxiv.org/pdf/2305.02301,2023-05-03,,"Deploying large language models (LLMs) is challenging because they are memory inefficient and compute-intensive for practical applications. In reaction, researchers train smaller task-specific models by either finetuning with human labels or distilling using LLM-generated labels. However, finetuning and distillation require large amounts of training data to achieve comparable performance to LLMs. We introduce Distilling step-by-step, a new mechanism that (a) trains smaller models that outperform LLMs, and (b) achieves so by leveraging less training data needed by finetuning or distillation. Our method extracts LLM rationales as additional supervision for training small models within a multi-task framework. We present three findings across 4 NLP benchmarks: First, compared to both finetuning and distillation, our mechanism achieves better performance with much fewer labeled/unlabeled training examples. Second, compared to few-shot prompted LLMs, we achieve better performance using substantially smaller model sizes. Third, we reduce both the model size and the amount of data required to outperform LLMs; our finetuned 770M T5 model outperforms the few-shot prompted 540B PaLM model using only 80% of available data on a benchmark, whereas standard finetuning the same T5 model struggles to match even by using 100% of the dataset. 
We release the code at: https://github.com/google-research/distilling-step-by-step .",aad167be3c902388ea625da4117fcae4325b8b7d,Semantic Scholar,,, prompt programming for large language models beyond the fewshot paradigm,"['Laria Reynolds', 'Kyle McDonell']",https://arxiv.org/pdf/2102.07350,2021-02-15,,"Prevailing methods for mapping large generative language models to supervised tasks may fail to sufficiently probe models’ novel capabilities. Using GPT-3 as a case study, we show that 0-shot prompts can significantly outperform few-shot prompts. We suggest that the function of few-shot examples in these cases is better described as locating an already learned task rather than meta-learning. This analysis motivates rethinking the role of prompts in controlling and evaluating powerful language models. We discuss methods of prompt programming, emphasizing the usefulness of considering prompts through the lens of natural language. We explore techniques for exploiting the capacity of narratives and cultural anchors to encode nuanced intentions and techniques for encouraging deconstruction of a problem into components before producing a verdict. Informed by this more encompassing theory of prompt programming, we also introduce the idea of a metaprompt that seeds the model to generate its own natural language prompts for a range of tasks. Finally, we discuss how these more general methods of interacting with language models can be incorporated into existing and future benchmarks and practical applications.",ac3cdb50606f7770eef8e4cd951840a4f71287a0,Semantic Scholar,,, the potential and pitfalls of using a large language model such as chatgpt or gpt4 as a clinical assistant,"['Jingqing Zhang', 'Kai Sun', 'A. Jagadeesh', 'Mahta Ghahfarokhi', 'Deepa Gupta', 'Ashok Gupta', 'Vibhor Gupta', 'Yike Guo']",https://arxiv.org/pdf/2307.08152,2023-07-16,,"Recent studies have demonstrated promising performance of ChatGPT and GPT-4 on several medical domain tasks. However, none have assessed its performance using a large-scale real-world electronic health record database, nor have evaluated its utility in providing clinical diagnostic assistance for patients across a full range of disease presentation. We performed two analyses using ChatGPT and GPT-4, one to identify patients with specific medical diagnoses using a real-world large electronic health record database and the other, in providing diagnostic assistance to healthcare workers in the prospective evaluation of hypothetical patients. Our results show that GPT-4 across disease classification tasks with chain of thought and few-shot prompting can achieve performance as high as 96% F1 scores. For patient assessment, GPT-4 can accurately diagnose three out of four times. However, there were mentions of factually incorrect statements, overlooking crucial medical findings, recommendations for unnecessary investigations and overtreatment. These issues coupled with privacy concerns, make these models currently inadequate for real world clinical use. 
However, limited data and time needed for prompt engineering in comparison to configuration of conventional machine learning workflows highlight their potential for scalability across healthcare applications.",b3d6fec3f1a878b0c612f0ffed820b045c2c46d8,Semantic Scholar,,, do gpts produce less literal translations,"['Vikas Raunak', 'Arul Menezes', 'Matt Post', 'Hany Hassan Awadallah']",http://arxiv.org/pdf/2305.16806,2023-05-26,,"Large Language Models (LLMs) such as GPT-3 have emerged as general-purpose language models capable of addressing many natural language generation or understanding tasks. On the task of Machine Translation (MT), multiple works have investigated few-shot prompting mechanisms to elicit better translations from LLMs. However, there has been relatively little investigation on how such translations differ qualitatively from the translations generated by standard Neural Machine Translation (NMT) models. In this work, we investigate these differences in terms of the literalness of translations produced by the two systems. Using literalness measures involving word alignment and monotonicity, we find that translations out of English (E-X) from GPTs tend to be less literal, while exhibiting similar or better scores on MT quality metrics. We demonstrate that this finding is borne out in human evaluations as well. We then show that these differences are especially pronounced when translating sentences that contain idiomatic expressions.",b4170009de40c1c46adea6a314734434ecd4b0dc,Semantic Scholar,,, adelt transpilation between deep learning frameworks,"['Linyuan Gong', 'Jiayi Wang', 'Alvin Cheung']",http://arxiv.org/pdf/2303.03593,2023-03-07,,"We propose Adversarial DEep Learning Transpiler (ADELT) for source-to-source transpilation between deep learning frameworks. Unlike prior approaches, we decouple the transpilation of code skeletons and the mapping of API keywords (an API function name or a parameter name). ADELT transpile code skeletons using few-shot prompting on big language models. Based on contextual embeddings extracted by a BERT for code, we train aligned API embeddings in a domain-adversarial setup, upon which we generate a dictionary for keyword translation. The model is trained on our unlabeled DL corpus from web crawl data, without using any hand-crafted rules and parallel data. Our method outperforms state-of-the-art transpilers on multiple transpilation pairs including PyTorch-Keras and PyTorch-MXNet by 15.9pts and 12.0pts in exact match scores respectively.",b6bea98ca29267acbebca6cdf64eb07a5671e000,Semantic Scholar,,, decomposed prompting for machine translation between related languages using large language models,"['Ratish Puduppully', 'Raj Dabre', 'A. Aw', 'Nancy F. Chen']",http://arxiv.org/pdf/2305.13085,2023-05-22,,"This study investigates machine translation between related languages i.e., languages within the same family that share linguistic characteristics such as word order and lexical similarity. Machine translation through few-shot prompting leverages a small set of translation pair examples to generate translations for test sentences. This procedure requires the model to learn how to generate translations while simultaneously ensuring that token ordering is maintained to produce a fluent and accurate translation. We propose that for related languages, the task of machine translation can be simplified by leveraging the monotonic alignment characteristic of such languages. 
We introduce DecoMT, a novel approach of few-shot prompting that decomposes the translation process into a sequence of word chunk translations. Through automatic and human evaluation conducted on multiple related language pairs across various language families, we demonstrate that our proposed approach of decomposed prompting surpasses multiple established few-shot baseline approaches. For example, DecoMT outperforms the strong few-shot prompting BLOOM model with an average improvement of 8 chrF++ scores across the examined languages.",b6e5855b6a4e425ba251a93516f2bccffe5ba403,Semantic Scholar,,, prompt a robot to walk with large language models,"['Yen-Jen Wang', 'Bike Zhang', 'Jianyu Chen', 'K. Sreenath']",https://arxiv.org/pdf/2309.09969,2023-09-18,,"Large language models (LLMs) pre-trained on vast internet-scale data have showcased remarkable capabilities across diverse domains. Recently, there has been escalating interest in deploying LLMs for robotics, aiming to harness the power of foundation models in real-world settings. However, this approach faces significant challenges, particularly in grounding these models in the physical world and in generating dynamic robot motions. To address these issues, we introduce a novel paradigm in which we use few-shot prompts collected from the physical environment, enabling the LLM to autoregressively generate low-level control commands for robots without task-specific fine-tuning. Experiments across various robots and environments validate that our method can effectively prompt a robot to walk. We thus illustrate how LLMs can proficiently function as low-level feedback controllers for dynamic motion control even in high-dimensional robotic systems. The project website and source code can be found at: https://prompt2walk.github.io/ .",b70075b496c1f519093884945be5670c32cbceed,Semantic Scholar,,, freshllms refreshing large language models with search engine augmentation,"['Tu Vu', 'Mohit Iyyer', 'Xuezhi Wang', 'Noah Constant', 'Jerry Wei', 'Jason Wei', 'Chris Tar', 'Yun-Hsuan Sung', 'Denny Zhou', 'Quoc Le', 'Thang Luong']",https://arxiv.org/pdf/2310.03214,2023-10-05,,"Most large language models (LLMs) are trained once and never updated; thus, they lack the ability to dynamically adapt to our ever-changing world. In this work, we perform a detailed study of the factuality of LLM-generated text in the context of answering questions that test current world knowledge. Specifically, we introduce FreshQA, a novel dynamic QA benchmark encompassing a diverse range of question and answer types, including questions that require fast-changing world knowledge as well as questions with false premises that need to be debunked. We benchmark a diverse array of both closed and open-source LLMs under a two-mode evaluation procedure that allows us to measure both correctness and hallucination. Through human evaluations involving more than 50K judgments, we shed light on limitations of these models and demonstrate significant room for improvement: for instance, all models (regardless of model size) struggle on questions that involve fast-changing knowledge and false premises. Motivated by these results, we present FreshPrompt, a simple few-shot prompting method that substantially boosts the performance of an LLM on FreshQA by incorporating relevant and up-to-date information retrieved from a search engine into the prompt. 
Our experiments show that FreshPrompt outperforms both competing search engine-augmented prompting methods such as Self-Ask (Press et al., 2022) as well as commercial systems such as Perplexity.AI. Further analysis of FreshPrompt reveals that both the number of retrieved evidences and their order play a key role in influencing the correctness of LLM-generated answers. Additionally, instructing the LLM to generate concise and direct answers helps reduce hallucination compared to encouraging more verbose answers. To facilitate future work, we release FreshQA at github.com/freshllms/freshqa and commit to updating it at regular intervals.",be177300487b6d0f25e6cade9a31900454b13281,Semantic Scholar,,, enhancing incontext learning with answer feedback for multispan question answering,"['Zixian Huang', 'Jiaying Zhou', 'Gengyang Xiao', 'Gong Cheng']",http://arxiv.org/pdf/2306.04508,2023-06-07,,"Whereas the recent emergence of large language models (LLMs) like ChatGPT has exhibited impressive general performance, it still has a large gap with fully-supervised models on specific tasks such as multi-span question answering. Previous researches found that in-context learning is an effective approach to exploiting LLM, by using a few task-related labeled data as demonstration examples to construct a few-shot prompt for answering new questions. A popular implementation is to concatenate a few questions and their correct answers through simple templates, informing LLM of the desired output. In this paper, we propose a novel way of employing labeled data such that it also informs LLM of some undesired output, by extending demonstration examples with feedback about answers predicted by an off-the-shelf model, e.g., correct, incorrect, or incomplete. Experiments on three multi-span question answering datasets as well as a keyphrase extraction dataset show that our new prompting strategy consistently improves LLM's in-context learning performance.",c1647923704251875f4160e91b59afbbdc58483e,Semantic Scholar,,, internetaugmented language models through fewshot prompting for opendomain question answering,"['Angeliki Lazaridou', 'E. Gribovskaya', 'Wojciech Stokowiec', 'N. Grigorev']",https://arxiv.org/pdf/2203.05115,2022-03-10,,"In this work, we aim to capitalize on the unique few-shot capabilities of large-scale language models (LSLMs) to overcome some of their challenges with respect to grounding to factual and up-to-date information. Motivated by semi-parametric language models (LMs), which ground their decisions in external retrieved evidence, we use few-shot prompting to learn to condition LMs on information returned from the web using Google Search, a broad and constantly updated knowledge source. Our approach does not involve fine-tuning or learning additional parameters, thus making it applicable to any LM, offering therefore a strong baseline. Indeed, we find that LMs conditioned on the web surpass performance of closed-book models of similar, or even larger, model sizes in open-domain question answering. Finally, we find that increasing the inference-time compute of models, achieved via using multiple retrieved evidences to generate multiple answers followed by a reranking stage that uses scores generated by the same LMs, leads to better performance and alleviates lower performance of smaller few-shot LMs. 
All in all, our findings suggest that it might be beneficial to slow down the race towards the biggest model and instead shift attention towards finding more effective ways to use models, including but not limited to, better prompting or increasing inference-time compute.",c70eb74e09c41e8fcc71dd59e3b4d631f657f7cd,Semantic Scholar,,, is chatgpt a good recommender a preliminary study,"['Junling Liu', 'Chaoyong Liu', 'Renjie Lv', 'Kangdi Zhou', 'Y. Zhang']",http://arxiv.org/pdf/2304.10149,2023-04-20,,"Recommendation systems have witnessed significant advancements and have been widely used over the past decades. However, most traditional recommendation methods are task-specific and therefore lack efficient generalization ability. Recently, the emergence of ChatGPT has significantly advanced NLP tasks by enhancing the capabilities of conversational models. Nonetheless, the application of ChatGPT in the recommendation domain has not been thoroughly investigated. In this paper, we employ ChatGPT as a general-purpose recommendation model to explore its potential for transferring extensive linguistic and world knowledge acquired from large-scale corpora to recommendation scenarios. Specifically, we design a set of prompts and evaluate ChatGPT's performance on five recommendation scenarios. Unlike traditional recommendation methods, we do not fine-tune ChatGPT during the entire evaluation process, relying only on the prompts themselves to convert recommendation tasks into natural language tasks. Further, we explore the use of few-shot prompting to inject interaction information that contains user potential interest to help ChatGPT better understand user needs and interests. Comprehensive experimental results on Amazon Beauty dataset show that ChatGPT has achieved promising results in certain tasks and is capable of reaching the baseline level in others. We conduct human evaluations on two explainability-oriented tasks to more accurately evaluate the quality of contents generated by different models. And the human evaluations show ChatGPT can truly understand the provided information and generate clearer and more reasonable results. We hope that our study can inspire researchers to further explore the potential of language models like ChatGPT to improve recommendation performance and contribute to the advancement of the recommendation systems field.",ca7bd64d372e3bcb3f4633ca4a20291ff57de3c3,Semantic Scholar,,, legal prompting teaching a language model to think like a lawyer,"['Fang Yu', 'Lee Quartey', 'Frank Schilder']",http://arxiv.org/pdf/2212.01326,2022-12-02,,"Large language models that are capable of zero or few-shot prompting approaches have given rise to the new research area of prompt engineering. Recent advances showed that for example Chain-of-Thought (CoT) prompts can improve arithmetic or common sense tasks significantly. We explore how such approaches fare with legal reasoning tasks and take the COLIEE entailment task based on the Japanese Bar exam for testing zero-shot/few-shot and fine-tuning approaches. Our findings show that while CoT prompting and fine-tuning with explanations approaches show improvements, the best results are produced by prompts that are derived from specific legal reasoning techniques such as IRAC (Issue, Rule, Application, Conclusion). 
Based on our experiments we improve the 2021 best result from 0.7037 accuracy to 0.8148 accuracy and beat the 2022 best system of 0.6789 accuracy with an accuracy of 0.7431.",cc43306e22dbfd5bc35251ab8c8ba37e4fc2a1b3,Semantic Scholar,,, query2doc query expansion with large language models,"['Liang Wang', 'Nan Yang', 'Furu Wei']",https://arxiv.org/pdf/2303.07678,2023-03-14,,"This paper introduces a simple yet effective query expansion approach, denoted as query2doc, to improve both sparse and dense retrieval systems. The proposed method first generates pseudo-documents by few-shot prompting large language models (LLMs), and then expands the query with generated pseudo-documents. LLMs are trained on web-scale text corpora and are adept at knowledge memorization. The pseudo-documents from LLMs often contain highly relevant information that can aid in query disambiguation and guide the retrievers. Experimental results demonstrate that query2doc boosts the performance of BM25 by 3% to 15% on ad-hoc IR datasets, such as MS-MARCO and TREC DL, without any model fine-tuning. Furthermore, our method also benefits state-of-the-art dense retrievers in terms of both in-domain and out-of-domain results.",ccc772d88c231275f24c4fac9b28bbe0942e1107,Semantic Scholar,,, how to design translation prompts for chatgpt an empirical study,"['Yuan Gao', 'Ruili Wang', 'Feng Hou']",http://arxiv.org/pdf/2304.02182,2023-04-05,,"The recently released ChatGPT has demonstrated surprising abilities in natural language understanding and natural language generation. Machine translation relies heavily on the abilities of language understanding and generation. Thus, in this paper, we explore how to assist machine translation with ChatGPT. We adopt several translation prompts on a wide range of translations. Our experimental results show that ChatGPT with designed translation prompts can achieve comparable or better performance over commercial translation systems for high-resource language translations. We further evaluate the translation quality using multiple references, and ChatGPT achieves superior performance compared to commercial systems. We also conduct experiments on domain-specific translations, the final results show that ChatGPT is able to comprehend the provided domain keyword and adjust accordingly to output proper translations. At last, we perform few-shot prompts that show consistent improvement across different base prompts. Our work provides empirical evidence that ChatGPT still has great potential in translations.",cd77ea482d9245f3fcaeb670261a00c3fb5cabbd,Semantic Scholar,,, passive learning of active causal strategies in agents and language models,"['Andrew Kyle Lampinen', 'Stephanie C. Y. Chan', 'Ishita Dasgupta', 'A. Nam', 'Jane X. Wang']",https://arxiv.org/pdf/2305.16183,2023-05-25,,"What can be learned about causality and experimentation from passive data? This question is salient given recent successes of passively-trained language models in interactive domains such as tool use. Passive learning is inherently limited. However, we show that purely passive learning can in fact allow an agent to learn generalizable strategies for determining and using causal structures, as long as the agent can intervene at test time. We formally illustrate that learning a strategy of first experimenting, then seeking goals, can allow generalization from passive learning in principle. 
We then show empirically that agents trained via imitation on expert data can indeed generalize at test time to infer and use causal links which are never present in the training data; these agents can also generalize experimentation strategies to novel variable sets never observed in training. We then show that strategies for causal intervention and exploitation can be generalized from passive data even in a more complex environment with high-dimensional observations, with the support of natural language explanations. Explanations can even allow passive learners to generalize out-of-distribution from perfectly-confounded training data. Finally, we show that language models, trained only on passive next-word prediction, can generalize causal intervention strategies from a few-shot prompt containing examples of experimentation, together with explanations and reasoning. These results highlight the surprising power of passive learning of active causal strategies, and may help to understand the behaviors and capabilities of language models.",ce0154d9251f67c262512b6e598f3aa3ba9fe9a4,Semantic Scholar,,, diversity measures domainindependent proxies for failure in language model queries,"['Noel Ngu', 'Nathaniel Lee', 'P. Shakarian']",https://arxiv.org/pdf/2308.11189,2023-08-22,,"Error prediction in large language models often relies on domain-specific information. In this paper, we present measures for quantification of error in the response of a large language model based on the diversity of responses to a given prompt - hence independent of the underlying application. We describe how three such measures - based on entropy, Gini impurity, and centroid distance - can be employed. We perform a suite of experiments on multiple datasets and temperature settings to demonstrate that these measures strongly correlate with the probability of failure. Additionally, we present empirical results demonstrating how these measures can be applied to few-shot prompting, chain-of-thought reasoning, and error detection.",d4fc988c6510420a5290dfe8d1a991ca4878d696,Semantic Scholar,,, log parsing how far can chatgpt go,"['Van-Hoang Le', 'Hongyu Zhang']",https://arxiv.org/pdf/2306.01590,2023-06-02,,"Software logs play an essential role in ensuring the reliability and maintainability of large-scale software systems, as they are often the sole source of runtime information. Log parsing, which converts raw log messages into structured data, is an important initial step towards downstream log analytics. In recent studies, ChatGPT, the current cutting-edge large language model (LLM), has been widely applied to a wide range of software engineering tasks. However, its performance in automated log parsing remains unclear. In this paper, we evaluate ChatGPT's ability to undertake log parsing by addressing two research questions. (1) Can ChatGPT effectively parse logs? (2) How does ChatGPT perform with different prompting methods? Our results show that ChatGPT can achieve promising results for log parsing with appropriate prompts, especially with few-shot prompting. Based on our findings, we outline several challenges and opportunities for ChatGPT-based log parsing.",d589c49e1cd1dd3b994dcac01b4c6e7fb8eef161,Semantic Scholar,,, an empirical evaluation of prompting strategies for large language models in zeroshot clinical natural language processing,"['S. Sivarajkumar', 'Mark Kelley', 'Alyssa Samolyk-Mazzanti', 'S. 
Visweswaran', 'Yanshan Wang']",https://arxiv.org/pdf/2309.08008,2023-09-14,,"Large language models (LLMs) have shown remarkable capabilities in Natural Language Processing (NLP), especially in domains where labeled data is scarce or expensive, such as clinical domain. However, to unlock the clinical knowledge hidden in these LLMs, we need to design effective prompts that can guide them to perform specific clinical NLP tasks without any task-specific training data. This is known as in-context learning, which is an art and science that requires understanding the strengths and weaknesses of different LLMs and prompt engineering approaches. In this paper, we present a comprehensive and systematic experimental study on prompt engineering for five clinical NLP tasks: Clinical Sense Disambiguation, Biomedical Evidence Extraction, Coreference Resolution, Medication Status Extraction, and Medication Attribute Extraction. We assessed the prompts proposed in recent literature, including simple prefix, simple cloze, chain of thought, and anticipatory prompts, and introduced two new types of prompts, namely heuristic prompting and ensemble prompting. We evaluated the performance of these prompts on three state-of-the-art LLMs: GPT-3.5, BARD, and LLAMA2. We also contrasted zero-shot prompting with few-shot prompting, and provide novel insights and guidelines for prompt engineering for LLMs in clinical NLP. To the best of our knowledge, this is one of the first works on the empirical evaluation of different prompt engineering approaches for clinical NLP in this era of generative AI, and we hope that it will inspire and inform future research in this area.",d5a6fc6aa139066e3b66ba63002e7d84c109aebc,Semantic Scholar,,, mindagent emergent gaming interaction,"['Ran Gong', 'Qiuyuan Huang', 'Xiaojian Ma', 'Hoi Vo', 'Zane Durante', 'Yusuke Noda', 'Zilong Zheng', 'Song-Chun Zhu', 'Demetri Terzopoulos', 'Fei-Fei Li', 'Jianfeng Gao']",https://arxiv.org/pdf/2309.09971,2023-09-18,,"Large Language Models (LLMs) have the capacity of performing complex scheduling in a multi-agent system and can coordinate these agents into completing sophisticated tasks that require extensive collaboration. However, despite the introduction of numerous gaming frameworks, the community has insufficient benchmarks towards building general multi-agents collaboration infrastructure that encompass both LLM and human-NPCs collaborations. In this work, we propose a novel infrastructure - MindAgent - to evaluate planning and coordination emergent capabilities for gaming interaction. In particular, our infrastructure leverages existing gaming framework, to i) require understanding of the coordinator for a multi-agent system, ii) collaborate with human players via un-finetuned proper instructions, and iii) establish an in-context learning on few-shot prompt with feedback. Furthermore, we introduce CUISINEWORLD, a new gaming scenario and related benchmark that dispatch a multi-agent collaboration efficiency and supervise multiple agents playing the game simultaneously. We conduct comprehensive evaluations with new auto-metric CoS for calculating the collaboration efficiency. Finally, our infrastructure can be deployed into real-world gaming scenarios in a customized VR version of CUISINEWORLD and adapted in existing broader Minecraft gaming domain. 
We hope our findings on LLMs and the new infrastructure for general-purpose scheduling and coordination can help shed light on how such skills can be obtained by learning from large language corpora.",d7d712e507c1c6273b05c773c825a668c5cf1504,Semantic Scholar,,, boosted prompt ensembles for large language models,"['Silviu Pitis', 'Michael Ruogu Zhang', 'Andrew Wang', 'Jimmy Ba']",http://arxiv.org/pdf/2304.05970,2023-04-12,,"Methods such as chain-of-thought prompting and self-consistency have pushed the frontier of language model reasoning performance with no additional training. To further improve performance, we propose a prompt ensembling method for large language models, which uses a small dataset to construct a set of few shot prompts that together comprise a ``boosted prompt ensemble''. The few shot examples for each prompt are chosen in a stepwise fashion to be ``hard'' examples on which the previous step's ensemble is uncertain. We show that this outperforms single-prompt output-space ensembles and bagged prompt-space ensembles on the GSM8k and AQuA datasets, among others. We propose both train-time and test-time versions of boosted prompting that use different levels of available annotation and conduct a detailed empirical study of our algorithm.",dca6c3927ade6481a1ae080f5c24decbfeced1be,Semantic Scholar,,, bootstrapping multilingual semantic parsers using large language models,"['Abhijeet Awasthi', 'Nitish Gupta', 'Bidisha Samanta', 'Shachi Dave', 'Sunita Sarawagi', 'P. Talukdar']",http://arxiv.org/pdf/2210.07313,2022-10-13,,"Despite cross-lingual generalization demonstrated by pre-trained multilingual models, the translate-train paradigm of transferring English datasets across multiple languages remains to be a key mechanism for training task-specific multilingual models. However, for many low-resource languages, the availability of a reliable translation service entails significant amounts of costly human-annotated translation pairs. Further, translation services may continue to be brittle due to domain mismatch between task-specific input text and general-purpose text used for training translation models. For multilingual semantic parsing, we demonstrate the effectiveness and flexibility offered by large language models (LLMs) for translating English datasets into several languages via few-shot prompting. Through extensive comparisons on two public datasets, MTOP and MASSIVE, spanning 50 languages and several domains, we show that our method of translating data using LLMs outperforms a strong translate-train baseline on 41 out of 50 languages. We study the key design choices that enable more effective multilingual data translation via prompted LLMs.",dda0f7f086fc875d583604f8b0cf4a8678bc4de4,Semantic Scholar,,, prompt2model generating deployable models from natural language instructions,"['Vijay Viswanathan', 'Chenyang Zhao', 'Amanda Bertsch', 'Tongshuang Sherry Wu', 'Graham Neubig']",https://arxiv.org/pdf/2308.12261,2023-08-23,,"Large language models (LLMs) enable system builders today to create competent NLP systems through prompting, where they only need to describe the task in natural language and provide a few examples. However, in other ways, LLMs are a step backward from traditional special-purpose NLP models; they require extensive computational resources for deployment and can be gated behind APIs. 
In this paper, we propose Prompt2Model, a general-purpose method that takes a natural language task description like the prompts provided to LLMs, and uses it to train a special-purpose model that is conducive to deployment. This is done through a multi-step process of retrieval of existing datasets and pretrained models, dataset generation using LLMs, and supervised fine-tuning on these retrieved and generated datasets. Over three tasks, we demonstrate that given the same few-shot prompt as input, Prompt2Model trains models that outperform the results of a strong LLM, gpt-3.5-turbo, by an average of 20% while being up to 700 times smaller. We also show that this data can be used to obtain reliable performance estimates of model performance, enabling model developers to assess model reliability before deployment. Prompt2Model is available open-source at https://github.com/neulab/prompt2model.",e69684fb06a7b1fe621d7ef0c97fc2ca0e122c43,Semantic Scholar,,, multilingual large language models are not (yet) codeswitchers,"['Ruochen Zhang', 'Samuel Cahyawijaya', 'Jan Christian Blaise Cruz', 'Alham Fikri Aji']",http://arxiv.org/pdf/2305.14235,2023-05-23,,"Multilingual Large Language Models (LLMs) have recently shown great capabilities in a wide range of tasks, exhibiting state-of-the-art performance through zero-shot or few-shot prompting methods. While there have been extensive studies on their abilities in monolingual tasks, the investigation of their potential in the context of code-switching (CSW), the practice of alternating languages within an utterance, remains relatively uncharted. In this paper, we provide a comprehensive empirical analysis of various multilingual LLMs, benchmarking their performance across four tasks: sentiment analysis, machine translation, summarization and word-level language identification. Our results indicate that despite multilingual LLMs exhibiting promising outcomes in certain tasks using zero or few-shot prompting, they still underperform in comparison to fine-tuned models of much smaller scales. We argue that current""multilingualism""in LLMs does not inherently imply proficiency with code-switching texts, calling for future research to bridge this discrepancy.",eda54452d8a8a412c2a985ef11572cb468906b1f,Semantic Scholar,,, product information extraction using chatgpt,"['Alexander Brinkmann', 'Roee Shraga', 'Reng Chiz Der', 'Christian Bizer']",http://arxiv.org/pdf/2306.14921,2023-06-23,,"Structured product data in the form of attribute/value pairs is the foundation of many e-commerce applications such as faceted product search, product comparison, and product recommendation. Product offers often only contain textual descriptions of the product attributes in the form of titles or free text. Hence, extracting attribute/value pairs from textual product descriptions is an essential enabler for e-commerce applications. In order to excel, state-of-the-art product information extraction methods require large quantities of task-specific training data. The methods also struggle with generalizing to out-of-distribution attributes and attribute values that were not a part of the training data. Due to being pre-trained on huge amounts of text as well as due to emergent effects resulting from the model size, Large Language Models like ChatGPT have the potential to address both of these shortcomings. This paper explores the potential of ChatGPT for extracting attribute/value pairs from product descriptions. We experiment with different zero-shot and few-shot prompt designs. 
Our results show that ChatGPT achieves a performance similar to a pre-trained language model but requires much smaller amounts of training data and computation for fine-tuning.",f00e7326baa9600e46b3a8e7077dc3a349f90a01,Semantic Scholar,,, large language models for user interest journeys,"['Konstantina Christakopoulou', 'Alberto Lalama', 'Cj Adams', 'Iris Qu', 'Yifat Amir', 'S. Chucri', 'Pierce Vollucci', 'Fabio Soldo', 'Dina Bseiso', 'Sarah Scodel', 'Lucas Dixon', 'Ed H. Chi', 'Minmin Chen']",http://arxiv.org/pdf/2305.15498,2023-05-24,,"Large language models (LLMs) have shown impressive capabilities in natural language understanding and generation. Their potential for deeper user understanding and improved personalized user experience on recommendation platforms is, however, largely untapped. This paper aims to address this gap. Recommender systems today capture users' interests through encoding their historical activities on the platforms. The generated user representations are hard to examine or interpret. On the other hand, if we were to ask people about interests they pursue in their life, they might talk about their hobbies, like I just started learning the ukulele, or their relaxation routines, e.g., I like to watch Saturday Night Live, or I want to plant a vertical garden. We argue, and demonstrate through extensive experiments, that LLMs as foundation models can reason through user activities, and describe their interests in nuanced and interesting ways, similar to how a human would. We define interest journeys as the persistent and overarching user interests, in other words, the non-transient ones. These are the interests that we believe will benefit most from the nuanced and personalized descriptions. We introduce a framework in which we first perform personalized extraction of interest journeys, and then summarize the extracted journeys via LLMs, using techniques like few-shot prompting, prompt-tuning and fine-tuning. Together, our results in prompting LLMs to name extracted user journeys in a large-scale industrial platform demonstrate great potential of these models in providing deeper, more interpretable, and controllable user understanding. We believe LLM powered user understanding can be a stepping stone to entirely new user experiences on recommendation platforms that are journey-aware, assistive, and enabling frictionless conversation down the line.",f834aed32f5531bfa426faab71878c549572500e,Semantic Scholar,,, promptbased extraction of social determinants of health using fewshot learning,"['Giridhar Kaushik Ramachandran', 'Yujuan Fu', 'Bin Han', 'K. Lybarger', 'Nicholas J. Dobbins', 'Ozlem Uzuner', 'M. Yetisgen']",http://arxiv.org/pdf/2306.07170,2023-06-12,,"Social determinants of health (SDOH) documented in the electronic health record through unstructured text are increasingly being studied to understand how SDOH impacts patient health outcomes. In this work, we utilize the Social History Annotation Corpus (SHAC), a multi-institutional corpus of de-identified social history sections annotated for SDOH, including substance use, employment, and living status information. We explore the automatic extraction of SDOH information with SHAC in both standoff and inline annotation formats using GPT-4 in a one-shot prompting setting. We compare GPT-4 extraction performance with a high-performing supervised approach and perform thorough error analyses. 
Our prompt-based GPT-4 method achieved an overall 0.652 F1 on the SHAC test set, similar to the 7th best-performing system among all teams in the n2c2 challenge with SHAC.",386bd4d25043516f076ea7b2296a1ebec84f43ce,Semantic Scholar,,, deplot oneshot visual language reasoning by plottotable translation,"['Fangyu Liu', 'Julian Martin Eisenschlos', 'Francesco Piccinno', 'Syrine Krichene', 'Chenxi Pang', 'Kenton Lee', 'Mandar Joshi', 'Wenhu Chen', 'Nigel Collier', 'Y. Altun']",http://arxiv.org/pdf/2212.10505,2022-12-20,,"Visual language such as charts and plots is ubiquitous in the human world. Comprehending plots and charts requires strong reasoning skills. Prior state-of-the-art (SOTA) models require at least tens of thousands of training examples and their reasoning capabilities are still much limited, especially on complex human-written queries. This paper presents the first one-shot solution to visual language reasoning. We decompose the challenge of visual language reasoning into two steps: (1) plot-to-text translation, and (2) reasoning over the translated text. The key in this method is a modality conversion module, named as DePlot, which translates the image of a plot or chart to a linearized table. The output of DePlot can then be directly used to prompt a pretrained large language model (LLM), exploiting the few-shot reasoning capabilities of LLMs. To obtain DePlot, we standardize the plot-to-table task by establishing unified task formats and metrics, and train DePlot end-to-end on this task. DePlot can then be used off-the-shelf together with LLMs in a plug-and-play fashion. Compared with a SOTA model finetuned on more than>28k data points, DePlot+LLM with just one-shot prompting achieves a 24.0% improvement over finetuned SOTA on human-written queries from the task of chart QA.",4d3a49d1439a0b8fbb0e9f588970ad0f1d70dec8,Semantic Scholar,,, short answer grading using oneshot prompting and text similarity scoring model,['Su-Youn Yoon'],http://arxiv.org/pdf/2305.18638,2023-05-29,,"In this study, we developed an automated short answer grading (ASAG) model that provided both analytic scores and final holistic scores. Short answer items typically consist of multiple sub-questions, and providing an analytic score and the text span relevant to each sub-question can increase the interpretability of the automated scores. Furthermore, they can be used to generate actionable feedback for students. Despite these advantages, most studies have focused on predicting only holistic scores due to the difficulty in constructing dataset with manual annotations. To address this difficulty, we used large language model (LLM)-based one-shot prompting and a text similarity scoring model with domain adaptation using small manually annotated dataset. The accuracy and quadratic weighted kappa of our model were 0.67 and 0.71 on a subset of the publicly available ASAG dataset. The model achieved a substantial improvement over the majority baseline.",d1aa858644154af50e36860e6761ae52ae655bd3,Semantic Scholar,,, utilizing language models for energy load forecasting,"['Hao Xue', 'Flora D. Salim']",https://arxiv.org/pdf/2310.17788,2023-10-26,,"Energy load forecasting plays a crucial role in optimizing resource allocation and managing energy consumption in buildings and cities. In this paper, we propose a novel approach that leverages language models for energy load forecasting. We employ prompting techniques to convert energy consumption data into descriptive sentences, enabling fine-tuning of language models. 
By adopting an autoregressive generation approach, our proposed method enables predictions of various horizons of future energy load consumption. Through extensive experiments on real-world datasets, we demonstrate the effectiveness and accuracy of our proposed method. Our results indicate that utilizing language models for energy load forecasting holds promise for enhancing energy efficiency and facilitating intelligent decision-making in energy systems.",00c2aea466034c563b7aa3cd8eadb1fc46b119fa,Semantic Scholar,,, s3dst structured opendomain dialogue segmentation and state tracking in the era of llms,"['Sarkar Snigdha Sarathi Das', 'C. Shah', 'Mengting Wan', 'Jennifer Neville', 'Longfei Yang', 'Reid Andersen', 'Georg Buscher', 'Tara Safavi']",https://arxiv.org/pdf/2309.08827,2023-09-16,,"The traditional Dialogue State Tracking (DST) problem aims to track user preferences and intents in user-agent conversations. While sufficient for task-oriented dialogue systems supporting narrow domain applications, the advent of Large Language Model (LLM)-based chat systems has introduced many real-world intricacies in open-domain dialogues. These intricacies manifest in the form of increased complexity in contextual interactions, extended dialogue sessions encompassing a diverse array of topics, and more frequent contextual shifts. To handle these intricacies arising from evolving LLM-based chat systems, we propose joint dialogue segmentation and state tracking per segment in open-domain dialogue systems. Assuming a zero-shot setting appropriate to a true open-domain dialogue system, we propose S3-DST, a structured prompting technique that harnesses Pre-Analytical Recollection, a novel grounding mechanism we designed for improving long context tracking. To demonstrate the efficacy of our proposed approach in joint segmentation and state tracking, we evaluate S3-DST on a proprietary anonymized open-domain dialogue dataset, as well as publicly available DST and segmentation datasets. Across all datasets and settings, S3-DST consistently outperforms the state-of-the-art, demonstrating its potency and robustness for the next generation of LLM-based chat systems.",034f1d77d832460a239072c81b5bb178b93c1e9f,Semantic Scholar,,, take a step back evoking reasoning via abstraction in large language models,"['Huaixiu Steven Zheng', 'Swaroop Mishra', 'Xinyun Chen', 'Heng-Tze Cheng', 'E. Chi', 'Quoc V. Le', 'Denny Zhou']",https://arxiv.org/pdf/2310.06117,2023-10-09,,"We present Step-Back Prompting, a simple prompting technique that enables LLMs to do abstractions to derive high-level concepts and first principles from instances containing specific details. Using the concepts and principles to guide the reasoning steps, LLMs significantly improve their abilities in following a correct reasoning path towards the solution. We conduct experiments of Step-Back Prompting with PaLM-2L models and observe substantial performance gains on a wide range of challenging reasoning-intensive tasks including STEM, Knowledge QA, and Multi-Hop Reasoning. 
For instance, Step-Back Prompting improves PaLM-2L performance on MMLU Physics and Chemistry by 7% and 11%, TimeQA by 27%, and MuSiQue by 7%.",0786c88990235414611478099e43611542d973b0,Semantic Scholar,,, chaidt a framework for prompting conversational generative ai agents to actively participate in cocreation,['Brandon Harwood'],http://arxiv.org/pdf/2305.03852,2023-05-05,,"This paper explores the potential for utilizing generative AI models in group-focused co-creative frameworks to enhance problem solving and ideation in business innovation and co-creation contexts, and proposes a novel prompting technique for conversational generative AI agents which employ methods inspired by traditional 'human-to-human' facilitation and instruction to enable active contribution to Design Thinking, a co-creative framework. Through experiments using this prompting technique, we gather evidence that conversational generative transformers (i.e. ChatGPT) have the capability to contribute context-specific, useful, and creative input into Design Thinking activities. We also discuss the potential benefits, limitations, and risks associated with using generative AI models in co-creative ideation and provide recommendations for future research.",0820a7ec1b7cac3470836161a92da7d59f626d14,Semantic Scholar,,, image to tree with recursive prompting,"['James Batten', 'Matthew Sinclair', 'Ben Glocker', 'M. Schaap']",http://arxiv.org/pdf/2301.00447,2023-01-01,,". Extracting complex structures from grid-based data is a common key step in automated medical image analysis. The conventional solution to recovering tree-structured geometries typically involves computing the minimal cost path through intermediate representations derived from segmentation masks. However, this methodology has significant limitations in the context of projective imaging of tree-structured 3D anatomical data such as coronary arteries, since there are often overlapping branches in the 2D projection. In this work, we propose a novel approach to predicting tree connectivity structure which reformulates the task as an optimization problem over individual steps of a recursive process. We design and train a two-stage model which leverages the UNet and Transformer architectures and introduces an image-based prompting technique. Our proposed method achieves compelling results on a pair of synthetic datasets, and outperforms a shortest-path baseline.",118802f91718ea2c566f2eaf1b4e25c439459f4d,Semantic Scholar,,, spoken language intelligence of large language models for language learning,"['Linkai Peng', 'Baorian Nuchged', 'Yingming Gao']",https://arxiv.org/pdf/2308.14536,2023-08-28,,"People have long hoped for a conversational system that can assist in real-life situations, and recent progress on large language models (LLMs) is bringing this idea closer to reality. While LLMs are often impressive in performance, their efficacy in real-world scenarios that demand expert knowledge remains unclear. LLMs are believed to hold the most potential and value in education, especially in the development of Artificial intelligence (AI) based virtual teachers capable of facilitating language learning. Our focus is centered on evaluating the efficacy of LLMs in the realm of education, specifically in the areas of spoken language learning which encompass phonetics, phonology, and second language acquisition. 
We introduce a new multiple-choice question dataset to evaluate the effectiveness of LLMs in the aforementioned scenarios, including understanding and application of spoken language knowledge. In addition, we investigate the influence of various prompting techniques such as zero- and few-shot method (prepending the question with question-answer exemplars), chain-of-thought (CoT, think step-by-step), in-domain exemplar and external tools (Google, Wikipedia). We conducted a large-scale evaluation on popular LLMs (20 distinct models) using these methods. We achieved significant performance improvements compared to the zero-shot baseline in the practical questions reasoning (GPT-3.5, 49.1% -> 63.1%; LLaMA2-70B-Chat, 42.2% -> 48.6%). We found that models of different sizes have good understanding of concepts in phonetics, phonology, and second language acquisition, but show limitations in reasoning for real-world problems. Additionally, we also explore preliminary findings on conversational communication.",19b43ff57e5d8f8a99da4110fbc30b4ecc39a527,Semantic Scholar,,, scalable multirobot collaboration with large language models centralized or decentralized systems,"['Yongchao Chen', 'Jacob Arkin', 'Yang Zhang', 'Nicholas Roy', 'Chuchu Fan']",https://arxiv.org/pdf/2309.15943,2023-09-27,,"A flurry of recent work has demonstrated that pre-trained large language models (LLMs) can be effective task planners for a variety of single-robot tasks. The planning performance of LLMs is significantly improved via prompting techniques, such as in-context learning or re-prompting with state feedback, placing new importance on the token budget for the context window. An under-explored but natural next direction is to investigate LLMs as multi-robot task planners. However, long-horizon, heterogeneous multi-robot planning introduces new challenges of coordination while also pushing up against the limits of context window length. It is therefore critical to find token-efficient LLM planning frameworks that are also able to reason about the complexities of multi-robot coordination. In this work, we compare the task success rate and token efficiency of four multi-agent communication frameworks (centralized, decentralized, and two hybrid) as applied to four coordination-dependent multi-agent 2D task scenarios for increasing numbers of agents. We find that a hybrid framework achieves better task success rates across all four tasks and scales better to more agents. We further demonstrate the hybrid frameworks in 3D simulations where the vision-to-text problem and dynamical errors are considered. See our project website https://yongchao98.github.io/MIT-REALM-Multi-Robot/ for prompts, videos, and code.",1ad735714ad2e4ee5b94ce26c976e5ee5c7cde3b,Semantic Scholar,,, the utility of large language models and generative ai for education research,"['Andrew Katz', 'Umair Shakir', 'B. Chambers']",http://arxiv.org/pdf/2305.18125,2023-05-29,,"The use of natural language processing (NLP) techniques in engineering education can provide valuable insights into the underlying processes involved in generating text. While accessing these insights can be labor-intensive if done manually, recent advances in NLP and large language models have made it a realistic option for individuals. This study explores and evaluates a combination of clustering, summarization, and prompting techniques to analyze over 1,000 student essays in which students discussed their career interests. 
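The spoken language learning entry above compares zero-shot, few-shot (prepended question-answer exemplars), and chain-of-thought prompting. The sketch below shows how such prompt variants are typically assembled for a multiple-choice item; the exemplar and questions are invented and the wording is not taken from the paper.

```python
# Illustrative construction of the three prompt variants compared above:
# zero-shot, few-shot (prepended QA exemplars), and zero-shot chain-of-thought.
# The exemplar and questions below are made up for demonstration.

EXEMPLARS = [
    ("Which articulator is primarily used to produce a bilabial sound?\n"
     "A) tongue  B) lips  C) velum  D) glottis", "B"),
]

def zero_shot(question: str) -> str:
    return question + "\nAnswer:"

def few_shot(question: str) -> str:
    demo = "\n\n".join(q + "\nAnswer: " + a for q, a in EXEMPLARS)
    return demo + "\n\n" + question + "\nAnswer:"

def chain_of_thought(question: str) -> str:
    # Zero-shot CoT: append a "think step-by-step" cue before asking for the answer.
    return question + "\nLet's think step by step, then give the final answer."

if __name__ == "__main__":
    q = ("Which of the following is an example of a minimal pair?\n"
         "A) cat/cats  B) ship/sheep  C) run/ran  D) dog/dogs")
    for build in (zero_shot, few_shot, chain_of_thought):
        print("---", build.__name__, "---")
        print(build(q))
```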
The specific assignment prompted students to define and explain their career goals as engineers. Using text embedding representations of student responses, we clustered the responses together to identify thematically similar statements from students. The clustered responses were then summarized to quickly identify career interest themes. We also used a set of a priori codes about career satisfaction and sectors to demonstrate an alternative approach to using these generative text models to analyze student writing. The results of this study demonstrate the feasibility and usefulness of NLP techniques in engineering education research. By automating the initial analysis of student essays, researchers and educators can more efficiently and accurately identify key themes and patterns in student writing. The methods presented in this paper have broader applications for engineering education and research purposes beyond analyzing student essays. By explaining these methods to the engineering education community, readers can utilize them in their own contexts.",1fc0e5b30bfede1b78389d00f8c41bacd29ecd7f,Semantic Scholar,,, foundation metrics quantifying effectiveness of healthcare conversations powered by generative ai,"['Mahyar Abbasian', 'Elahe Khatibi', 'Iman Azimi', 'David Oniani', 'Zahra Shakeri Hossein Abad', 'Alexander Thieme', 'Zhongqi Yang', 'Yanshan Wang', 'Bryant Lin', 'Olivier Gevaert', 'Li-Jia Li', 'Ramesh Jain', 'Amir M. Rahmani']",https://arxiv.org/pdf/2309.12444,2023-09-21,,"Generative Artificial Intelligence is set to revolutionize healthcare delivery by transforming traditional patient care into a more personalized, efficient, and proactive process. Chatbots, serving as interactive conversational models, will probably drive this patient-centered transformation in healthcare. Through the provision of various services, including diagnosis, personalized lifestyle recommendations, and mental health support, the objective is to substantially augment patient health outcomes, all the while mitigating the workload burden on healthcare providers. The life-critical nature of healthcare applications necessitates establishing a unified and comprehensive set of evaluation metrics for conversational models. Existing evaluation metrics proposed for various generic large language models (LLMs) demonstrate a lack of comprehension regarding medical and health concepts and their significance in promoting patients' well-being. Moreover, these metrics neglect pivotal user-centered aspects, including trust-building, ethics, personalization, empathy, user comprehension, and emotional support. The purpose of this paper is to explore state-of-the-art LLM-based evaluation metrics that are specifically applicable to the assessment of interactive conversational models in healthcare. Subsequently, we present a comprehensive set of evaluation metrics designed to thoroughly assess the performance of healthcare chatbots from an end-user perspective. These metrics encompass an evaluation of language processing abilities, impact on real-world clinical tasks, and effectiveness in user-interactive conversations. 
Finally, we engage in a discussion concerning the challenges associated with defining and implementing these metrics, with particular emphasis on confounding factors such as the target audience, evaluation methods, and prompt techniques involved in the evaluation process.",20cb4e0bd8871d33d82fc72ea82a0aa1dd922810,Semantic Scholar,,, an empirical study on the robustness of the segment anything model (sam),"['Yuqing Wang', 'Yun Zhao', 'Linda Petzold']",http://arxiv.org/pdf/2305.06422,2023-05-10,,"The Segment Anything Model (SAM) is a foundation model for general image segmentation. Although it exhibits impressive performance predominantly on natural images, understanding its robustness against various image perturbations and domains is critical for real-world applications where such challenges frequently arise. In this study we conduct a comprehensive robustness investigation of SAM under diverse real-world conditions. Our experiments encompass a wide range of image perturbations. Our experimental results demonstrate that SAM's performance generally declines under perturbed images, with varying degrees of vulnerability across different perturbations. By customizing prompting techniques and leveraging domain knowledge based on the unique characteristics of each dataset, the model's resilience to these perturbations can be enhanced, addressing dataset-specific challenges. This work sheds light on the limitations and strengths of SAM in real-world applications, promoting the development of more robust and versatile image segmentation solutions.",26d31d641116b656826737335b2accb802ac9931,Semantic Scholar,,, boosting lowdata instance segmentation by unsupervised pretraining with saliency prompt,"['Hao Li', 'Dingwen Zhang', 'Nian Liu', 'Lechao Cheng', 'Yalun Dai', 'Chaoxi Zhang', 'Xinggang Wang', 'Junwei Han']",https://arxiv.org/pdf/2302.01171,2023-02-02,,"Inspired by DETR variants, query-based end-to-end instance segmentation (QEIS) methods have recently outperformed CNN-based models on large-scale datasets. Yet they would lose efficacy when only a small amount of training data is available since it's hard for the crucial queries/kernels to learn localization and shape priors. To this end, this work offers a novel unsupervised pre-training solution for low-data regimes. Inspired by the recent success of the Prompting technique, we introduce a new pre-training method that boosts QEIS models by giving Saliency Prompt for queries/kernels. Our method contains three parts: 1) Saliency Masks Proposal is responsible for generating pseudo masks from unlabeled images based on the saliency mechanism. 2) Prompt-Kernel Matching transfers pseudo masks into prompts and injects the corresponding localization and shape priors to the best-matched kernels. 3) Kernel Supervision is applied to supply supervision at the kernel level for robust learning. From a practical perspective, our pre-training method helps QEIS models achieve a similar convergence speed and comparable performance with CNN-based models in low-data regimes. Experimental results show that our method significantly boosts several QEIS models on three datasets. Code: https://github.com/lifuguan/saliency.prompt",29965a1efc21a637e03a5e0a869d77eca77f5085,Semantic Scholar,,, scigraphqa a largescale synthetic multiturn questionanswering dataset for scientific graphs,"['Sheng Li', 'Nima Tajbakhsh']",https://arxiv.org/pdf/2308.03349,2023-08-07,,"In this work, we present SciGraphQA, a synthetic multi-turn question-answer dataset related to academic graphs. 
SciGraphQA is 13 times larger than ChartVQA, the previously largest chart-visual question-answering dataset. It is also the largest open-sourced chart VQA dataset with non-synthetic charts. To build our dataset, we selected 290,000 Computer Science or Machine Learning ArXiv papers published between 2010 and 2020, and then used Palm-2 to generate 295K samples of open-vocabulary multi-turn question-answering dialogues about the graphs. As context, we provided the text-only Palm-2 with paper title, abstract, paragraph mentioning the graph, and rich text contextual data from the graph itself, obtaining dialogues with an average 2.23 question-answer turns for each graph. We asked GPT-4 to assess the matching quality of our question-answer turns given the paper's context, obtaining an average rating of 8.7/10 on our 3K test set. We evaluated the 0-shot capability of the most popular MLLM models such as LLaVa, mPLUGowl, BLIP-2, and openFlamingo's on our dataset, finding LLaVA-13B being the most performant with a CIDEr score of 0.08. We further enriched the question prompts for LLAVA by including the serialized data tables extracted from the graphs using the DePlot model, boosting LLaVA's 0-shot CIDEr to 0.15. To verify the validity of our dataset, we also fine-tuned LLaVa using our dataset, reaching a substantially higher CIDEr score of 0.26. We anticipate further accuracy improvement by including segmentation mask tokens and leveraging larger LLM backbones coupled with emergent prompting techniques. Our code and data are open-sourced.",2bd1b8990db73b6495c11082bea2d5f925c5226f,Semantic Scholar,,, oneshot labeling for automatic relevance estimation,"['Sean MacAvaney', 'Luca Soldaini']",https://arxiv.org/pdf/2302.11266,2023-02-22,,"Dealing with unjudged documents (""holes"") in relevance assessments is a perennial problem when evaluating search systems with offline experiments. Holes can reduce the apparent effectiveness of retrieval systems during evaluation and introduce biases in models trained with incomplete data. In this work, we explore whether large language models can help us fill such holes to improve offline evaluations. We examine an extreme, albeit common, evaluation setting wherein only a single known relevant document per query is available for evaluation. We then explore various approaches for predicting the relevance of unjudged documents with respect to a query and the known relevant document, including nearest neighbor, supervised, and prompting techniques. We find that although the predictions of these One-Shot Labelers (1SL) frequently disagree with human assessments, the labels they produce yield a far more reliable ranking of systems than the single labels do alone. Specifically, the strongest approaches can consistently reach system ranking correlations of over 0.86 with the full rankings over a variety of measures. Meanwhile, the approach substantially increases the reliability of t-tests due to filling holes in relevance assessments, giving researchers more confidence in results they find to be significant. Alongside this work, we release an easy-to-use software package to enable the use of 1SL for evaluation of other ad-hoc collections or systems.",352bcafbcc95a84d96019688955cab5c43eb23f0,Semantic Scholar,,, large language models can be easily distracted by irrelevant context,"['Freda Shi', 'Xinyun Chen', 'Kanishka Misra', 'Nathan Scales', 'David Dohan', 'E. 
Chi', 'Nathanael Scharli', 'Denny Zhou']",http://arxiv.org/pdf/2302.00093,2023-01-31,,"Large language models have achieved impressive performance on various natural language processing tasks. However, so far they have been evaluated primarily on benchmarks where all information in the input context is relevant for solving the task. In this work, we investigate the distractibility of large language models, i.e., how the model problem-solving accuracy can be influenced by irrelevant context. In particular, we introduce Grade-School Math with Irrelevant Context (GSM-IC), an arithmetic reasoning dataset with irrelevant information in the problem description. We use this benchmark to measure the distractibility of cutting-edge prompting techniques for large language models, and find that the model performance is dramatically decreased when irrelevant information is included. We also identify several approaches for mitigating this deficiency, such as decoding with self-consistency and adding to the prompt an instruction that tells the language model to ignore the irrelevant information.",3d68522abfadfc8ee6b7ec9edaaf91f1b2f38e5e,Semantic Scholar,,, are emergent abilities in large language models just incontext learning,"['Sheng Lu', 'Irina Bigoulaeva', 'Rachneet Sachdeva', 'Harish Tayyar Madabushi', 'Iryna Gurevych']",https://arxiv.org/pdf/2309.01809,2023-09-04,,"Large language models have exhibited emergent abilities, demonstrating exceptional performance across diverse tasks for which they were not explicitly trained, including those that require complex reasoning abilities. The emergence of such abilities carries profound implications for the future direction of research in NLP, especially as the deployment of such models becomes more prevalent. However, one key challenge is that the evaluation of these abilities is often confounded by competencies that arise in models through alternative prompting techniques, such as in-context learning and instruction following, which also emerge as the models are scaled up. In this study, we provide the first comprehensive examination of these emergent abilities while accounting for various potentially biasing factors that can influence the evaluation of models. We conduct rigorous tests on a set of 18 models, encompassing a parameter range from 60 million to 175 billion parameters, across a comprehensive set of 22 tasks. Through an extensive series of over 1,000 experiments, we provide compelling evidence that emergent abilities can primarily be ascribed to in-context learning. We find no evidence for the emergence of reasoning abilities, thus providing valuable insights into the underlying mechanisms driving the observed abilities and thus alleviating safety concerns regarding their use.",3e4afde5a9de2c1801da99b8aff5ae05923f256b,Semantic Scholar,,, are large language models ready for healthcare a comparative study on clinical language understanding,"['Yuqing Wang', 'Yun Zhao', 'Linda Petzold']",https://arxiv.org/pdf/2304.05368,2023-04-09,,"Large language models (LLMs) have made significant progress in various domains, including healthcare. However, the specialized nature of clinical language understanding tasks presents unique challenges and limitations that warrant further investigation. In this study, we conduct a comprehensive evaluation of state-of-the-art LLMs, namely GPT-3.5, GPT-4, and Bard, within the realm of clinical language understanding tasks. 
These tasks span a diverse range, including named entity recognition, relation extraction, natural language inference, semantic textual similarity, document classification, and question-answering. We also introduce a novel prompting strategy, self-questioning prompting (SQP), tailored to enhance LLMs' performance by eliciting informative questions and answers pertinent to the clinical scenarios at hand. Our evaluation underscores the significance of task-specific learning strategies and prompting techniques for improving LLMs' effectiveness in healthcare-related tasks. Additionally, our in-depth error analysis on the challenging relation extraction task offers valuable insights into error distribution and potential avenues for improvement using SQP. Our study sheds light on the practical implications of employing LLMs in the specialized domain of healthcare, serving as a foundation for future research and the development of potential applications in healthcare settings.",42780f9c7f73d73d7a887e2f787af0e079703d40,Semantic Scholar,,, leveraging large language models to generate answer set programs,"['Adam Ishay', 'Zhun Yang', 'Joohyung Lee']",https://arxiv.org/pdf/2307.07699,2023-07-15,,"Large language models (LLMs), such as GPT-3 and GPT-4, have demonstrated exceptional performance in various natural language processing tasks and have shown the ability to solve certain reasoning problems. However, their reasoning capabilities are limited and relatively shallow, despite the application of various prompting techniques. In contrast, formal logic is adept at handling complex reasoning, but translating natural language descriptions into formal logic is a challenging task that non-experts struggle with. This paper proposes a neuro-symbolic method that combines the strengths of large language models and answer set programming. Specifically, we employ an LLM to transform natural language descriptions of logic puzzles into answer set programs. We carefully design prompts for an LLM to convert natural language descriptions into answer set programs in a step by step manner. Surprisingly, with just a few in-context learning examples, LLMs can generate reasonably complex answer set programs. The majority of errors made are relatively simple and can be easily corrected by humans, thus enabling LLMs to effectively assist in the creation of answer set programs.",4a6d7b11c4aba5a23f68856989366dd4311e960b,Semantic Scholar,,, extracting multivalued relations from language models,"['Sneha Singhania', 'S. Razniewski', 'G. Weikum']",https://aclanthology.org/2023.repl4nlp-1.12.pdf,2023-07-06,,"The widespread usage of latent language representations via pre-trained language models (LMs) suggests that they are a promising source of structured knowledge. However, existing methods focus only on a single object per subject-relation pair, even though often multiple objects are correct. To overcome this limitation, we analyze these representations for their potential to yield materialized multi-object relational knowledge. We formulate the problem as a rank-then-select task. For ranking candidate objects, we evaluate existing prompting techniques and propose new ones incorporating domain knowledge. Among the selection methods, we find that choosing objects with a likelihood above a learned relation-specific threshold gives a 49.5% F1 score. 
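The multi-valued relation extraction entry above describes a rank-then-select scheme: score candidate objects, then keep those above a learned relation-specific threshold. A minimal sketch of that selection step, with invented scores and thresholds standing in for model-assigned likelihoods.

```python
# Sketch of the rank-then-select step described above: rank candidate objects
# by a model-assigned likelihood, then keep every candidate whose score clears
# a per-relation threshold. Scores and thresholds here are invented.

from typing import Dict, List, Tuple

# Learned per-relation thresholds (illustrative values only).
RELATION_THRESHOLDS: Dict[str, float] = {
    "official_language": 0.35,
    "shares_border_with": 0.20,
}

def select_objects(relation: str,
                   scored_candidates: List[Tuple[str, float]]) -> List[str]:
    """Keep candidates whose likelihood exceeds the relation's threshold."""
    threshold = RELATION_THRESHOLDS[relation]
    ranked = sorted(scored_candidates, key=lambda x: x[1], reverse=True)
    return [obj for obj, score in ranked if score >= threshold]

if __name__ == "__main__":
    # e.g. subject = "Switzerland", relation = "official_language"
    candidates = [("German", 0.92), ("French", 0.88), ("Italian", 0.74),
                  ("Romansh", 0.41), ("English", 0.12)]
    print(select_objects("official_language", candidates))
    # -> ['German', 'French', 'Italian', 'Romansh']
```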
Our results highlight the difficulty of employing LMs for the multi-valued slot-filling task, and pave the way for further research on extracting relational knowledge from latent language representations.",4b99e8273227fd05f2be20248050d81e97ab4f4e,Semantic Scholar,,, teaching algorithmic reasoning via incontext learning,"['Hattie Zhou', 'Azade Nova', 'H. Larochelle', 'Aaron C. Courville', 'Behnam Neyshabur', 'Hanie Sedghi']",http://arxiv.org/pdf/2211.09066,2022-11-15,,"Large language models (LLMs) have shown increasing in-context learning capabilities through scaling up model and data size. Despite this progress, LLMs are still unable to solve algorithmic reasoning problems. While providing a rationale with the final answer has led to further improvements in multi-step reasoning problems, Anil et al. 2022 showed that even simple algorithmic reasoning tasks such as parity are far from solved. In this work, we identify and study four key stages for successfully teaching algorithmic reasoning to LLMs: (1) formulating algorithms as skills, (2) teaching multiple skills simultaneously (skill accumulation), (3) teaching how to combine skills (skill composition) and (4) teaching how to use skills as tools. We show that it is possible to teach algorithmic reasoning to LLMs via in-context learning, which we refer to as algorithmic prompting. We evaluate our approach on a variety of arithmetic and quantitative reasoning tasks, and demonstrate significant boosts in performance over existing prompting techniques. In particular, for long parity, addition, multiplication and subtraction, we achieve an error reduction of approximately 10x, 9x, 5x and 2x respectively compared to the best available baselines.",4d17732d90440682b0500f4e209c6cc4fac20e0e,Semantic Scholar,,, understanding and improving visual prompting a labelmapping perspective,"['Aochuan Chen', 'Yuguang Yao', 'Pin-Yu Chen', 'Yihua Zhang', 'Sijia Liu']",https://arxiv.org/pdf/2211.11635,2022-11-21,,"We revisit and advance visual prompting (VP), an input prompting technique for vision tasks. VP can reprogram a fixed, pre-trained source model to accomplish downstream tasks in the target domain by simply incorporating universal prompts (in terms of input perturbation patterns) into downstream data points. Yet, it remains elusive why VP stays effective even given a ruleless label mapping (LM) between the source classes and the target classes. Inspired by the above, we ask: How is LM interrelated with VP? And how to exploit such a relationship to improve its accuracy on target tasks? We peer into the influence of LM on VP and provide an affirmative answer that a better ‘quality’ of LM (assessed by mapping precision and explanation) can consistently improve the effectiveness of VP. This is in contrast to the prior art where the factor of LM was missing. To optimize LM, we propose a new VP framework, termed ILM-VP (iterative label mapping-based visual prompting), which automatically re-maps the source labels to the target labels and progressively improves the target task accuracy of VP. Further, when using a contrastive language-image pretrained (CLIP) model for VP, we propose to integrate an LM process to assist the text prompt selection of CLIP and to improve the target task accuracy. Extensive experiments demonstrate that our proposal significantly outperforms state-of-the-art VP methods. 
As highlighted below, we show that when reprogramming an ImageNet-pretrained ResNet-18 to 13 target tasks, ILM-VP outperforms baselines by a substantial margin, e.g., 7.9% and 6.7% accuracy improvements in transfer learning to the target Flowers102 and CIFAR100 datasets. Besides, our proposal on CLIP-based VP provides 13.7% and 7.1% accuracy improvements on Flowers102 and DTD respectively. Code is available at https://github.com/OPTML-Group/ILM-VP.",4edd2d2770729380eda23826af1b78298b334a23,Semantic Scholar,,, adaptivesolver framework for dynamic strategy selection in large language model reasoning,"['Jianpeng Zhou', 'Wanjun Zhong', 'Yanlin Wang', 'Jiahai Wang']",https://arxiv.org/pdf/2310.01446,2023-10-01,,"Large Language Models (LLMs) are showcasing impressive ability in handling complex reasoning tasks. In real-world situations, problems often span a spectrum of complexities. Humans inherently adjust their problem-solving approaches based on task complexity. However, most methodologies that leverage LLMs tend to adopt a uniform approach: utilizing consistent models, prompting methods, and degrees of problem decomposition, regardless of the problem complexity. Their inflexibility can bring unnecessary computational overhead or sub-optimal performance. To address this problem, we introduce an Adaptive-Solver framework. It strategically modulates solving strategies based on the difficulties of the problems. Given an initial solution, the framework functions with two primary modules. The initial evaluation module assesses the adequacy of the current solution. If improvements are needed, the subsequent adaptation module comes into play. Within this module, three key adaptation strategies are employed: (1) Model Adaptation: Switching to a stronger LLM when a weaker variant is inadequate. (2) Prompting Method Adaptation: Alternating between different prompting techniques to suit the problem's nuances. (3) Decomposition Granularity Adaptation: Breaking down a complex problem into more fine-grained sub-questions to enhance solvability. Through such dynamic adaptations, our framework not only enhances computational efficiency but also elevates the overall performance. This dual-benefit ensures both the efficiency of the system for simpler tasks and the precision required for more complex questions. Experimental results from complex reasoning tasks reveal that the prompting method adaptation and decomposition granularity adaptation enhance performance across all tasks. Furthermore, the model adaptation approach significantly reduces API costs (up to 50%) while maintaining superior performance.",5076bbbf831a92174c9cc1b347bd0584560435fc,Semantic Scholar,,, generative speech recognition error correction with large language models and taskactivating prompting,"['Chao-Han Huck Yang', 'Yile Gu', 'Yi-Chieh Liu', 'Shalini Ghosh', 'I. Bulyko', 'A. Stolcke']",https://arxiv.org/pdf/2309.15649,2023-09-27,,"We explore the ability of large language models (LLMs) to act as speech recognition post-processors that perform rescoring and error correction. Our first focus is on instruction prompting to let LLMs perform these tasks without fine-tuning, for which we evaluate different prompting schemes, both zero- and few-shot in-context learning, and a novel “task activation” prompting method that combines causal instructions and demonstration to increase its context windows. 
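The Adaptive-Solver entry above couples an evaluation module with an adaptation module that escalates the model, the prompting method, or the decomposition granularity. A minimal sketch of that control loop; the solver ladder and adequacy check are hypothetical stand-ins, not the paper's actual components.

```python
# Sketch of the evaluate-then-adapt loop described in the Adaptive-Solver
# entry above. The solver configurations and the adequacy check are
# placeholders; a real system plugs in actual LLMs and evaluators.

from dataclasses import dataclass
from typing import Callable

@dataclass
class SolverConfig:
    model: str            # e.g. escalate from a weaker to a stronger LLM
    prompting: str        # e.g. switch from standard to chain-of-thought
    granularity: str      # e.g. whole problem vs. fine-grained sub-questions

# Ordered from cheapest to most powerful; values are illustrative only.
LADDER = [
    SolverConfig("weak-llm", "standard", "whole-problem"),
    SolverConfig("weak-llm", "chain-of-thought", "whole-problem"),
    SolverConfig("strong-llm", "chain-of-thought", "sub-questions"),
]

def adaptive_solve(problem: str,
                   solve: Callable[[str, SolverConfig], str],
                   is_adequate: Callable[[str, str], bool]) -> str:
    solution = solve(problem, LADDER[0])
    for config in LADDER[1:]:
        if is_adequate(problem, solution):      # evaluation module
            break
        solution = solve(problem, config)       # adaptation module re-solves
    return solution

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    solve = lambda p, c: f"[{c.model}/{c.prompting}] answer to: {p}"
    is_adequate = lambda p, s: "strong-llm" in s  # pretend only the strong model suffices
    print(adaptive_solve("Solve 24 with 3, 3, 8, 8", solve, is_adequate))
```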
Next, we show that rescoring only by in-context learning with frozen LLMs achieves results that are competitive with rescoring by domain-tuned LMs, using a pretrained first-pass recognition system and rescoring output on two out-of-domain tasks (ATIS and WSJ). By combining prompting techniques with fine-tuning we achieve error rates below the N-best oracle level, showcasing the generalization power of the LLMs.",50e8ab900d2ca4d83da120bbfe5338ee93dbe741,Semantic Scholar,,, multiprompt with depth partitioned crossmodal learning,"['Yiqi Wang', 'Xianda Guo', 'Zheng Hua Zhu', 'Yingjie Tian']",https://arxiv.org/pdf/2305.06221,2023-05-10,,"In recent years, soft prompt learning methods have been proposed to fine-tune large-scale vision-language pre-trained models for various downstream tasks. These methods typically combine learnable textual tokens with class tokens as input for models with frozen parameters. However, they often employ a single prompt to describe class contexts, failing to capture categories' diverse attributes adequately. This study introduces the Partitioned Multi-modal Prompt (PMPO), a multi-modal prompting technique that extends the soft prompt from a single learnable prompt to multiple prompts. Our method divides the visual encoder depths and connects learnable prompts to the separated visual depths, enabling different prompts to capture the hierarchical contextual depths of visual representations. Furthermore, to maximize the advantages of multi-prompt learning, we incorporate prior information from manually designed templates and learnable multi-prompts, thus improving the generalization capabilities of our approach. We evaluate the effectiveness of our approach on three challenging tasks: new class generalization, cross-dataset evaluation, and domain generalization. For instance, our method achieves a $79.28$ harmonic mean, averaged over 11 diverse image recognition datasets ($+7.62$ compared to CoOp), demonstrating significant competitiveness compared to state-of-the-art prompting methods.",511ad6b37cb028bdfbd6096e6d20aa4b8b34fafc,Semantic Scholar,,, large language models are pretty good zeroshot video game bug detectors,"['Mohammad Reza Taesiri', 'Finlay Macklon', 'Yihe Wang', 'Hengshuo Shen', 'C. Bezemer']",http://arxiv.org/pdf/2210.02506,2022-10-05,,"Video game testing requires game-specific knowledge as well as common sense reasoning about the events in the game. While AI-driven agents can satisfy the first requirement, it is not yet possible to meet the second requirement automatically. Therefore, video game testing often still relies on manual testing, and human testers are required to play the game thoroughly to detect bugs. As a result, it is challenging to fully automate game testing. In this study, we explore the possibility of leveraging the zero-shot capabilities of large language models for video game bug detection. By formulating the bug detection problem as a question-answering task, we show that large language models can identify which event is buggy in a sequence of textual descriptions of events from a game. To this end, we introduce the GameBugDescriptions benchmark dataset, which consists of 167 buggy gameplay videos and a total of 334 question-answer pairs across 8 games. We extensively evaluate the performance of six models across the OPT and InstructGPT large language model families on our benchmark dataset. Our results show promising results for employing language models to detect video game bugs. 
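The video game bug detection entry above casts the task as question answering over a sequence of textual event descriptions. A minimal sketch of such a prompt; the event list is invented and `llm()` is a placeholder for any completion API.

```python
# Sketch of the QA framing described above: list textual descriptions of
# in-game events and ask the model which one is buggy. Events are invented.

def llm(prompt: str) -> str:
    """Placeholder for a real LLM call."""
    return "<model answer>"

def buggy_event_prompt(events):
    numbered = "\n".join(f"{i}. {e}" for i, e in enumerate(events, 1))
    return (
        "The following events were observed in a video game, in order:\n"
        f"{numbered}\n"
        "Question: Which event, if any, looks like a bug? "
        "Answer with the event number and a brief reason."
    )

if __name__ == "__main__":
    events = [
        "The player walks up to a locked door.",
        "The player uses the rusty key on the door.",
        "The door opens.",
        "The player walks through the closed door's wall into the next room.",
    ]
    print(llm(buggy_event_prompt(events)))
```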
With the proper prompting technique, we could achieve an accuracy of 70.66%, and on some video games, up to 78.94%. Our code, evaluation data and the benchmark can be found on https://asgaardlab.github.io/LLMxBugs",55e3fe05598be7c3dd357d51166869f6571b824f,Semantic Scholar,,, help me think a simple prompting strategy for nonexperts to create customized content with models,"['Swaroop Mishra', 'E. Nouri']",http://arxiv.org/pdf/2208.08232,2022-08-17,,"Controlling the text generated by language models and customizing the content has been a long-standing challenge. Existing prompting techniques proposed in pursuit of providing control are task-specific and lack generality; this provides overwhelming choices for non-expert users to find a suitable method for their task. The effort associated with those techniques, such as in writing examples, explanations, instructions, etc. further limits their adoption among non-expert users. In this paper, we propose a simple prompting strategy HELP ME THINK where we encourage GPT3 to help non-expert users by asking a set of relevant questions and leveraging user answers to execute the task. We demonstrate the efficacy of our technique HELP ME THINK on a variety of tasks. Specifically, we focus on tasks that are hard for average humans and require significant thinking to perform. We hope our work will encourage the development of unconventional ways to harness the power of large language models.",5ba1e498665d2b3536cb436f0cf484dce03459fe,Semantic Scholar,,, leveraging fewshot data augmentation and waterfall prompting for response generation,"['Lea Krause', ""Selene B'aez Santamar'ia"", 'Michiel van der Meer', 'Urja Khurana']",https://arxiv.org/pdf/2308.01080,2023-08-02,,"This paper discusses our approaches for task-oriented conversational modelling using subjective knowledge, with a particular emphasis on response generation. Our methodology was shaped by an extensive data analysis that evaluated key factors such as response length, sentiment, and dialogue acts present in the provided dataset. We used few-shot learning to augment the data with newly generated subjective knowledge items and present three approaches for DSTC11: (1) task-specific model exploration, (2) incorporation of the most frequent question into all generated responses, and (3) a waterfall prompting technique using a combination of both GPT-3 and ChatGPT.",657e364ec6932558f426583dc31953e547bf6575,Semantic Scholar,,, the formai dataset generative ai in software security through the lens of formal verification,"['Norbert Tihanyi', 'Tamás Bisztray', 'Ridhi Jain', 'M. Ferrag', 'L. Cordeiro', 'Vasileios Mavroeidis']",https://arxiv.org/pdf/2307.02192,2023-07-05,,"This paper presents the FormAI dataset, a large collection of 112,000 AI-generated compilable and independent C programs with vulnerability classification. We introduce a dynamic zero-shot prompting technique constructed to spawn diverse programs utilizing Large Language Models (LLMs). The dataset is generated by GPT-3.5-turbo and comprises programs with varying levels of complexity. Some programs handle complicated tasks like network management, table games, or encryption, while others deal with simpler tasks like string manipulation. Every program is labeled with the vulnerabilities found within the source code, indicating the type, line number, and vulnerable function name. 
This is accomplished by employing a formal verification method using the Efficient SMT-based Bounded Model Checker (ESBMC), which uses model checking, abstract interpretation, constraint programming, and satisfiability modulo theories to reason over safety/security properties in programs. This approach definitively detects vulnerabilities and offers a formal model known as a counterexample, thus eliminating the possibility of generating false positive reports. We have associated the identified vulnerabilities with Common Weakness Enumeration (CWE) numbers. We make the source code available for the 112,000 programs, accompanied by a separate file containing the vulnerabilities detected in each program, making the dataset ideal for training LLMs and machine learning algorithms. Our study unveiled that according to ESBMC, 51.24% of the programs generated by GPT-3.5 contained vulnerabilities, thereby presenting considerable risks to software safety and security.",67455478e77c8672d0dd08f89735a8813bbfec65,Semantic Scholar,,, fixing rust compilation errors using llms,"['Pantazis Deligiannis', 'A. Lal', 'Nikita Mehrotra', 'Aseem Rastogi']",https://arxiv.org/pdf/2308.05177,2023-08-09,,"The Rust programming language, with its safety guarantees, has established itself as a viable choice for low-level systems programming language over the traditional, unsafe alternatives like C/C++. These guarantees come from a strong ownership-based type system, as well as primitive support for features like closures, pattern matching, etc., that make the code more concise and amenable to reasoning. These unique Rust features also pose a steep learning curve for programmers. This paper presents a tool called RustAssistant that leverages the emergent capabilities of Large Language Models (LLMs) to automatically suggest fixes for Rust compilation errors. RustAssistant uses a careful combination of prompting techniques as well as iteration with an LLM to deliver high accuracy of fixes. RustAssistant is able to achieve an impressive peak accuracy of roughly 74% on real-world compilation errors in popular open-source Rust repositories. We plan to release our dataset of Rust compilation errors to enable further research.",674c5ec7b144aea1f6b143baeb17cc839f52416e,Semantic Scholar,,, synthetic prompting generating chainofthought demonstrations for large language models,"['Zhihong Shao', 'Yeyun Gong', 'Yelong Shen', 'Minlie Huang', 'Nan Duan', 'Weizhu Chen']",http://arxiv.org/pdf/2302.00618,2023-02-01,,"Large language models can perform various reasoning tasks by using chain-of-thought prompting, which guides them to find answers through step-by-step demonstrations. However, the quality of the prompts depends on the demonstrations given to the models, and creating many of them by hand is costly. We introduce Synthetic prompting, a method that leverages a few handcrafted examples to prompt the model to generate more examples by itself, and selects effective demonstrations to elicit better reasoning. Our method alternates between a backward and forward process to generate new examples. The backward process generates a question that match a sampled reasoning chain, so that the question is solvable and clear. The forward process produces a more detailed reasoning chain for the question, improving the quality of the example. 
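The Synthetic Prompting entry above alternates a backward process (synthesize a question that matches a sampled reasoning chain) with a forward process (re-derive a more detailed chain for that question). A minimal sketch of that loop, assuming a hypothetical `llm()` completion helper and a toy seed chain; neither is taken from the paper.

```python
# Sketch of the backward/forward alternation described in the Synthetic
# Prompting entry above: backward = synthesize a question matching a sampled
# reasoning chain; forward = re-derive a more detailed chain for that question.

import random

def llm(prompt: str) -> str:
    """Placeholder completion function; swap in a real LLM call."""
    return "<model output>"

SEED_CHAINS = [
    "Start with 12 apples, give away 5, buy 8 more, so 12 - 5 + 8 = 15.",
]

def backward(chain: str) -> str:
    # Ask the model for a clear, solvable question whose solution is this chain.
    return llm("Write a word problem whose solution is exactly this "
               "reasoning chain:\n" + chain)

def forward(question: str) -> str:
    # Ask the model to solve the synthesized question with a detailed chain.
    return llm("Solve the following problem step by step, showing every "
               "intermediate step:\n" + question)

def synthesize_demonstrations(n: int):
    demos = []
    for _ in range(n):
        chain = random.choice(SEED_CHAINS)
        question = backward(chain)
        detailed_chain = forward(question)
        demos.append((question, detailed_chain))
    return demos

if __name__ == "__main__":
    for q, c in synthesize_demonstrations(2):
        print("Q:", q, "\nReasoning:", c, "\n")
```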
We evaluate our method on numerical, symbolic, and algorithmic reasoning tasks, and show that it outperforms existing prompting techniques.",69619a2a47faee7a29ec596db13172e2a42ff921,Semantic Scholar,,, unsupervised contrastconsistent ranking with language models,"['Niklas Stoehr', 'Pengxiang Cheng', 'Jing Wang', 'Daniel Preotiuc-Pietro', 'Rajarshi Bhowmik']",https://arxiv.org/pdf/2309.06991,2023-09-13,,"Language models contain ranking-based knowledge and are powerful solvers of in-context ranking tasks. For instance, they may have parametric knowledge about the ordering of countries by size or may be able to rank product reviews by sentiment. We compare pairwise, pointwise and listwise prompting techniques to elicit a language model's ranking knowledge. However, we find that even with careful calibration and constrained decoding, prompting-based techniques may not always be self-consistent in the rankings they produce. This motivates us to explore an alternative approach that is inspired by an unsupervised probing method called Contrast-Consistent Search (CCS). The idea is to train a probe guided by a logical constraint: a language model's representation of a statement and its negation must be mapped to contrastive true-false poles consistently across multiple statements. We hypothesize that similar constraints apply to ranking tasks where all items are related via consistent, pairwise or listwise comparisons. To this end, we extend the binary CCS method to Contrast-Consistent Ranking (CCR) by adapting existing ranking methods such as the Max-Margin Loss, Triplet Loss and an Ordinal Regression objective. Across different models and datasets, our results confirm that CCR probing performs better or, at least, on a par with prompting.",70b73e272621562c6261f86d2ebf814703b760ed,Semantic Scholar,,, unraveling chatgpt a critical analysis of aigenerated goaloriented dialogues and annotations,"['Tiziano Labruna', 'Sofia Brenna', 'Andrea Zaninello', 'B. Magnini']",http://arxiv.org/pdf/2305.14556,2023-05-23,,"Large pre-trained language models have exhibited unprecedented capabilities in producing high-quality text via prompting techniques. This fact introduces new possibilities for data collection and annotation, particularly in situations where such data is scarce, complex to gather, expensive, or even sensitive. In this paper, we explore the potential of these models to generate and annotate goal-oriented dialogues, and conduct an in-depth analysis to evaluate their quality. Our experiments employ ChatGPT, and encompass three categories of goal-oriented dialogues (task-oriented, collaborative, and explanatory), two generation modes (interactive and one-shot), and two languages (English and Italian). Based on extensive human-based evaluations, we demonstrate that the quality of generated dialogues and annotations is on par with those generated by humans.",7307ee3c819c34b7c93ccbbd330a4c889956b36f,Semantic Scholar,,, events realm event reasoning of entity states via language models,"['Evangelia Spiliopoulou', 'Artidoro Pagnoni', 'Yonatan Bisk', 'E. Hovy']",https://arxiv.org/pdf/2211.05392,2022-11-10,,"This paper investigates models of event implications. Specifically, how well models predict entity state-changes, by targeting their understanding of physical attributes. Nominally, Large Language models (LLM) have been exposed to procedural knowledge about how objects interact, yet our benchmarking shows they fail to reason about the world. 
Conversely, we also demonstrate that existing approaches often misrepresent the surprising abilities of LLMs via improper task encodings and that proper model prompting can dramatically improve performance of reported baseline results across multiple tasks. In particular, our results indicate that our prompting technique is especially useful for unseen attributes (out-of-domain) or when only limited data is available.",748a2700ec11f51560a69ec05c67ca9f97014be7,Semantic Scholar,,, fill in the blank exploring and enhancing llm capabilities for backward reasoning in math word problems,"['Aniruddha Deb', 'Neeva Oza', 'Sarthak Singla', 'Dinesh Khandelwal', 'Dinesh Garg', 'Parag Singla']",https://arxiv.org/pdf/2310.01991,2023-10-03,,"While forward reasoning (i.e. find the answer given the question) has been explored extensively in the recent literature, backward reasoning is relatively unexplored. We examine the backward reasoning capabilities of LLMs on Math Word Problems (MWPs): given a mathematical question and its answer, with some details omitted from the question, can LLMs effectively retrieve the missing information? In this paper, we formally define the backward reasoning task on math word problems and modify three datasets to evaluate this task: GSM8k, SVAMP and MultiArith. Our findings show a significant drop in the accuracy of models on backward reasoning compared to forward reasoning across four SOTA LLMs (GPT4, GPT3.5, PaLM-2, and LLaMa-2). Utilizing the specific format of this task, we propose three novel techniques that improve performance: Rephrase reformulates the given problem into a forward reasoning problem, PAL-Tools combines the idea of Program-Aided LLMs to produce a set of equations that can be solved by an external solver, and Check your Work exploits the availability of natural verifier of high accuracy in the forward direction, interleaving solving and verification steps. Finally, realizing that each of our base methods correctly solves a different set of problems, we propose a novel Bayesian formulation for creating an ensemble over these base methods aided by a verifier to further boost the accuracy by a significant margin. Extensive experimentation demonstrates that our techniques successively improve the performance of LLMs on the backward reasoning task, with the final ensemble-based method resulting in a substantial performance gain compared to the raw LLMs with standard prompting techniques such as chain-of-thought.",8db1dcae055842f43ccac04182957b20d15bbe6b,Semantic Scholar,,, investigating prompting techniques for zero and fewshot visual question answering,"['Rabiul Awal', 'Le Zhang', 'Aishwarya Agrawal']",http://arxiv.org/pdf/2306.09996,2023-06-16,,"In this paper, we explore effective prompting techniques to enhance zero- and few-shot Visual Question Answering (VQA) performance in contemporary Vision-Language Models (VLMs). Central to our investigation is the role of question templates in guiding VLMs to generate accurate answers. We identify that specific templates significantly influence VQA outcomes, underscoring the need for strategic template selection. Another pivotal aspect of our study is augmenting VLMs with image captions, providing them with additional visual cues alongside direct image features in VQA tasks. Surprisingly, this augmentation significantly improves the VLMs' performance in many cases, even though VLMs""see""the image directly! 
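The visual question answering entry above reports gains from pairing question templates with image captions as an extra textual cue. A minimal sketch of how such a caption-augmented prompt might be assembled; the template wording and caption are illustrative, not the paper's.

```python
# Sketch of caption-augmented VQA prompting as described above: combine a
# question template with an image caption so the model receives an extra
# textual cue alongside the image. Template and caption are invented.

TEMPLATE = (
    "Image caption: {caption}\n"
    "Question: {question}\n"
    "Short answer:"
)

def build_vqa_prompt(caption: str, question: str) -> str:
    return TEMPLATE.format(caption=caption, question=question)

if __name__ == "__main__":
    prompt = build_vqa_prompt(
        caption="A brown dog catching a red frisbee in a park.",
        question="What color is the frisbee?",
    )
    print(prompt)
```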
We explore chain-of-thought (CoT) reasoning and find that while standard CoT reasoning causes drops in performance, advanced methods like self-consistency can help recover it. Furthermore, we find that text-only few-shot examples enhance VLMs' alignment with the task format, particularly benefiting models prone to verbose zero-shot answers. Lastly, to mitigate the challenges associated with evaluating free-form open-ended VQA responses using string-matching based VQA metrics, we introduce a straightforward LLM-guided pre-processing technique to adapt the model responses to the expected ground-truth answer distribution. In summary, our research sheds light on the intricacies of prompting strategies in VLMs for VQA, emphasizing the synergistic use of captions, templates, and pre-processing to enhance model efficacy.",8efc20988021ce3b4b05dd44b13e27260ee9b99b,Semantic Scholar,,, zeroshot temporal relation extraction with chatgpt,"['Chenhan Yuan', 'Qianqian Xie', 'S. Ananiadou']",http://arxiv.org/pdf/2304.05454,2023-04-11,,"The goal of temporal relation extraction is to infer the temporal relation between two events in the document. Supervised models are dominant in this task. In this work, we investigate ChatGPT’s ability on zero-shot temporal relation extraction. We designed three different prompt techniques to break down the task and evaluate ChatGPT. Our experiments show that ChatGPT’s performance has a large gap with that of supervised methods and can heavily rely on the design of prompts. We further demonstrate that ChatGPT can infer more small relation classes correctly than supervised methods. The current shortcomings of ChatGPT on temporal relation extraction are also discussed in this paper. We found that ChatGPT cannot keep consistency during temporal inference and it fails in actively long-dependency temporal inference.",9087b835d92b72ab3208888916585ddce81c9d10,Semantic Scholar,,, enabling conversational interaction with mobile ui using large language models,"['Bryan Wang', 'Gang Li', 'Yang Li']",https://dl.acm.org/doi/pdf/10.1145/3544548.3580895,2022-09-18,,"Conversational agents show the promise to allow users to interact with mobile devices using language. However, to perform diverse UI tasks with natural language, developers typically need to create separate datasets and models for each specific task, which is expensive and effort-consuming. Recently, pre-trained large language models (LLMs) have been shown capable of generalizing to various downstream tasks when prompted with a handful of examples from the target task. This paper investigates the feasibility of enabling versatile conversational interactions with mobile UIs using a single LLM. We designed prompting techniques to adapt an LLM to mobile UIs. We experimented with four important modeling tasks that address various scenarios in conversational interaction. Our method achieved competitive performance on these challenging tasks without requiring dedicated datasets and training, offering a lightweight and generalizable approach to enable language-based mobile interaction.",99070fb6df9e8d11e30f7aaefcc9f0b0c5a73789,Semantic Scholar,,, questioning the survey responses of large language models,"['Ricardo Dominguez-Olmedo', 'Moritz Hardt', 'Celestine Mendler-Dunner']",https://arxiv.org/pdf/2306.07951,2023-06-13,,"As large language models increase in capability, researchers have started to conduct surveys of all kinds on these models with varying scientific motivations. 
In this work, we examine what we can learn from language models' survey responses on the basis of the well-established American Community Survey (ACS) by the U.S. Census Bureau. Using a de-facto standard multiple-choice prompting technique and evaluating 40 different language models, hundreds of thousands of times each on questions from the ACS, we systematically establish two dominant patterns. First, models have significant position and labeling biases, for example, towards survey responses labeled with the letter""A"". Second, when adjusting for labeling biases through randomized answer ordering, models across the board trend towards uniformly random survey responses. In fact, binary classifiers can almost perfectly differentiate between models' responses to the ACS and the responses of the US census. Taken together, our findings suggest caution in treating survey responses from language models as equivalent to those of human populations at present time.",a86e12654376323b712dd3d39d5ff22283f87a7b,Semantic Scholar,,, mathprompter mathematical reasoning using large language models,"['Shima Imani', 'Liang Du', 'H. Shrivastava']",http://arxiv.org/pdf/2303.05398,2023-03-04,,"Large Language Models (LLMs) have limited performance when solving arithmetic reasoning tasks and often provide incorrect answers. Unlike natural language understanding, math problems typically have a single correct answer, making the task of generating accurate solutions more challenging for LLMs. To the best of our knowledge, we are not aware of any LLMs that indicate their level of confidence in their responses which fuels a trust deficit in these models impeding their adoption. To address this deficiency, we propose ‘MathPrompter’, a technique that improves performance of LLMs on arithmetic problems along with increased reliance in the predictions. MathPrompter uses the Zero-shot chain-of-thought prompting technique to generate multiple algebraic expressions or python functions to solve the same math problem in different ways and thereby raise the confidence level in the output results. This is in contrast to other prompt based CoT methods, where there is no check on the validity of the intermediate steps followed. Our technique improves over state-of-the-art on the ‘MultiArith’ dataset (78.7% - 92.5%) evaluated using 175B parameter GPT-based LLM.",b626560f19f815808a289ef5c24a17c57320da70,Semantic Scholar,,, boosting logical reasoning in large language models through a new framework the graph of thought,"['Bin Lei', 'Pei-Hung Lin', 'C. Liao', 'Caiwen Ding']",https://arxiv.org/pdf/2308.08614,2023-08-16,,"Recent advancements in large-scale models, such as GPT-4, have showcased remarkable capabilities in addressing standard queries. However, when facing complex problems that require multi-step logical reasoning, their accuracy dramatically decreases. Current research has explored the realm of \textit{prompting engineering} to bolster the inferential capacities of these models. Our paper unveils a pioneering prompting technique, dubbed \textit{Graph of Thoughts (GoT)}. Through testing on a trio of escalating challenges: the 24-point game, resolution of high-degree polynomial equations, and derivation of formulas for recursive sequences, our method outperformed GPT-4, achieving accuracy improvements of $89.7\%$, $86\%$, and $56\%$ for each respective task. 
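The MathPrompter entry above raises confidence by deriving the same answer through multiple independent routes (e.g., an algebraic expression and a generated Python function) and checking that they agree. A minimal sketch of that consensus check; the two hard-coded derivations stand in for LLM-generated ones.

```python
# Sketch of the consensus idea in the MathPrompter entry above: derive the
# same quantity in several independent ways and only trust the result when
# the derivations agree. Both "derivations" here are hard-coded stand-ins.

def algebraic_solution(speed_kmh: float, hours: float) -> float:
    # Stand-in for an LLM-generated algebraic expression: distance = v * t.
    return speed_kmh * hours

def python_solution(speed_kmh: float, hours: float) -> float:
    # Stand-in for an LLM-generated Python function solving the same problem.
    distance = 0.0
    for _ in range(int(hours)):
        distance += speed_kmh
    return distance

def consensus(problem_inputs, solvers, tol: float = 1e-6):
    """Return (answer, agreement_flag) across all independent solvers."""
    results = [s(*problem_inputs) for s in solvers]
    agree = max(results) - min(results) <= tol
    return results[0], agree

if __name__ == "__main__":
    answer, trusted = consensus((60.0, 3.0), [algebraic_solution, python_solution])
    print(f"answer={answer}, agreement={trusted}")
```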
Moreover, when juxtaposed with the state-of-the-art (SOTA) prompting method, \textit{Tree of Thought (ToT)}, our approach registered an average accuracy boost of $23\%$, $24\%$, and $15\%$.",ba4aa83248a1d08b521392eb971e47d10b7c74e1,Semantic Scholar,,, scitab a challenging benchmark for compositional reasoning and claim verification on scientific tables,"['Xinyuan Lu', 'Liangming Pan', 'Qian Liu', 'Preslav Nakov', 'Min-Yen Kan']",http://arxiv.org/pdf/2305.13186,2023-05-22,,"Current scientific fact-checking benchmarks exhibit several shortcomings, such as biases arising from crowd-sourced claims and an over-reliance on text-based evidence. We present SCITAB, a challenging evaluation dataset consisting of 1.2K expert-verified scientific claims that 1) originate from authentic scientific publications and 2) require compositional reasoning for verification. The claims are paired with evidence-containing scientific tables annotated with labels. Through extensive evaluations, we demonstrate that SCITAB poses a significant challenge to state-of-the-art models, including table-based pretraining models and large language models. All models except GPT-4 achieved performance barely above random guessing. Popular prompting techniques, such as Chain-of-Thought, do not achieve much performance gains on SCITAB. Our analysis uncovers several unique challenges posed by SCITAB, including table grounding, claim ambiguity, and compositional reasoning. Our codes and data are publicly available at https://github.com/XinyuanLu00/SciTab.",c20b18d6b919695a69e416debf8bf1ffeac03992,Semantic Scholar,,, optr exploring the role of explanations in finetuning and prompting for reasoning skills of large language models,"['Badr AlKhamissi', 'Siddharth Verma', 'Ping Yu', 'Zhijing Jin', 'Asli Celikyilmaz', 'Mona T. Diab']",https://aclanthology.org/2023.nlrse-1.10.pdf,2023-05-19,,"We conduct a thorough investigation into the reasoning capabilities of Large Language Models (LLMs), focusing specifically on the Open Pretrained Transformers (OPT) models as a representative of such models. Our study entails finetuning three different sizes of OPT on a carefully curated reasoning corpus, resulting in two sets of finetuned models: OPT-R, finetuned without explanations, and OPT-RE, finetuned with explanations. We then evaluate all models on 57 out-of-domain tasks drawn from the Super-NaturalInstructions benchmark, covering 26 distinct reasoning skills, utilizing three prompting techniques. Through a comprehensive grid of 27 configurations and 6,156 test evaluations, we investigate the dimensions of finetuning, prompting, and scale to understand the role of explanations on different reasoning skills. Our findings reveal that having explanations in the fewshot exemplar has no significant impact on the model’s performance when the model is finetuned, while positively affecting the non-finetuned counterpart. Moreover, we observe a slight yet consistent increase in classification accuracy as we incorporate explanations during prompting and finetuning, respectively. 
Finally, we offer insights on which reasoning skills benefit the most from incorporating explanations during finetuning and prompting, such as Numerical (+20.4%) and Analogical (+13.9%) reasoning, as well as skills that exhibit negligible or negative effects.",c218cd1772999517b137bbbc9872c4f67e540b7f,Semantic Scholar,,, knowledgeprompted estimator a novel approach to explainable machine translation assessment,"['Hao Yang', 'Min Zhang', 'Shimin Tao', 'Minghan Wang', 'Daimeng Wei', 'Yanfei Jiang']",http://arxiv.org/pdf/2306.07486,2023-06-13,,"Cross-lingual Machine Translation (MT) quality estimation plays a crucial role in evaluating translation performance. GEMBA, the first MT quality assessment metric based on Large Language Models (LLMs), employs one-step prompting to achieve state-of-the-art (SOTA) in system-level MT quality estimation; however, it lacks segment-level analysis. In contrast, Chain-of-Thought (CoT) prompting outperforms one-step prompting by offering improved reasoning and explainability. In this paper, we introduce Knowledge-Prompted Estimator (KPE), a CoT prompting method that combines three one-step prompting techniques, including perplexity, token-level similarity, and sentence-level similarity. This method attains enhanced performance for segment-level estimation compared with previous deep learning models and one-step prompting approaches. Furthermore, supplementary experiments on word-level visualized alignment demonstrate that our KPE method significantly improves token alignment compared with earlier models and provides better interpretability for MT quality estimation. Code will be released upon publication.",d1bd7ae97588eccfbcd31ffce4fc924d12a5de4d,Semantic Scholar,,, prompting as probing using language models for knowledge base construction,"['Dimitrios Alivanistos', ""Selene B'aez Santamar'ia"", 'Michael Cochez', 'Jan-Christoph Kalo', 'Emile van Krieken', 'Thiviyan Thanapalasingam']",http://arxiv.org/pdf/2208.11057,2022-08-23,,"Language Models (LMs) have proven to be useful in various downstream applications, such as summarisation, translation, question answering and text classification. LMs are becoming increasingly important tools in Artificial Intelligence, because of the vast quantity of information they can store. In this work, we present ProP (Prompting as Probing), which utilizes GPT-3, a large Language Model originally proposed by OpenAI in 2020, to perform the task of Knowledge Base Construction (KBC). ProP implements a multi-step approach that combines a variety of prompting techniques to achieve this. Our results show that manual prompt curation is essential, that the LM must be encouraged to give answer sets of variable lengths, in particular including empty answer sets, that true/false questions are a useful device to increase precision on suggestions generated by the LM, that the size of the LM is a crucial factor, and that a dictionary of entity aliases improves the LM score. Our evaluation study indicates that these proposed techniques can substantially enhance the quality of the final predictions: ProP won track 2 of the LM-KBC competition, outperforming the baseline by 36.4 percentage points. 
Our implementation is available on https://github.com/HEmile/iswc-challenge.",ddc9aeac18638575bbb90ede4c6829ec15c2947e,Semantic Scholar,,, upar a kantianinspired prompting framework for enhancing large language model capabilities,"['Hejia Geng', 'Boxun Xu', 'Peng Li']",https://arxiv.org/pdf/2310.01441,2023-09-30,,"Large Language Models (LLMs) have demonstrated impressive inferential capabilities, with numerous research endeavors devoted to enhancing this capacity through prompting. Despite these efforts, a unified epistemological foundation is still conspicuously absent. Drawing inspiration from Kant's a priori philosophy, we propose the UPAR prompting framework, designed to emulate the structure of human cognition within LLMs. The UPAR framework is delineated into four phases:""Understand"",""Plan"",""Act"", and""Reflect"", enabling the extraction of structured information from complex contexts, prior planning of solutions, execution according to plan, and self-reflection. This structure significantly augments the explainability and accuracy of LLM inference, producing a human-understandable and inspectable inferential trajectory. Furthermore, our work offers an epistemological foundation for existing prompting techniques, allowing for a possible systematic integration of these methods. With GPT-4, our approach elevates the accuracy from COT baseline of 22.92% to 58.33% in a challenging subset of GSM8K, and from 67.91% to 75.40% in the causal judgment task. Without using few-shot examples or external tools, UPAR significantly outperforms existing prompting methods on SCIBENCH, a challenging dataset containing collegiate-level mathematics, chemistry, and physics scientific problems.",e61a96cf602ebff6683929aaf916e25614a475bc,Semantic Scholar,,, understanding stereotypes in language models towards robust measurement and zeroshot debiasing,"['Justus Mattern', 'Zhijing Jin', 'Mrinmaya Sachan', 'Rada Mihalcea', 'B. Scholkopf']",http://arxiv.org/pdf/2212.10678,2022-12-20,,"Generated texts from large pretrained language models have been shown to exhibit a variety of harmful, human-like biases about various demographics. These findings prompted large efforts aiming to understand and measure such effects, with the goal of providing benchmarks that can guide the development of techniques mitigating these stereotypical associations. However, as recent research has pointed out, the current benchmarks lack a robust experimental setup, consequently hindering the inference of meaningful conclusions from their evaluation metrics. In this paper, we extend these arguments and demonstrate that existing techniques and benchmarks aiming to measure stereotypes tend to be inaccurate and consist of a high degree of experimental noise that severely limits the knowledge we can gain from benchmarking language models based on them. Accordingly, we propose a new framework for robustly measuring and quantifying biases exhibited by generative language models. 
Finally, we use this framework to investigate GPT-3's occupational gender bias and propose prompting techniques for mitigating these biases without the need for fine-tuning.",ed5ebed7ff668fd7362d531a40b49b3aea33b3a9,Semantic Scholar,,, prompts should not be seen as secrets systematically measuring prompt extraction attack success,"['Yiming Zhang', 'Daphne Ippolito']",https://arxiv.org/pdf/2307.06865,2023-07-13,,"The generations of large language models are commonly controlled through prompting techniques, where a user's query to the model is prefixed with a prompt that aims to guide the model's behaviour on the query. The prompts used by companies to guide their models are often treated as secrets, to be hidden from the user making the query. They have even been treated as commodities to be bought and sold. However, there has been anecdotal evidence showing that the prompts can be extracted by a user even when they are kept secret. In this paper, we present a framework for systematically measuring the success of prompt extraction attacks. In experiments with multiple sources of prompts and multiple underlying language models, we find that simple text-based attacks can in fact reveal prompts with high probability.",f330f502bf1e92fabf7f246597fa9320d956c0c8,Semantic Scholar,,, minidalle3 interactive text to image by prompting large language models,"['Zeqiang Lai', 'Xizhou Zhu', 'Jifeng Dai', 'Yu Qiao', 'Wenhai Wang']",https://arxiv.org/pdf/2310.07653,2023-10-11,,"The revolution of artificial intelligence content generation has been rapidly accelerated with the booming text-to-image (T2I) diffusion models. Within just two years of development, it was unprecedentedly of high-quality, diversity, and creativity that the state-of-the-art models could generate. However, a prevalent limitation persists in the effective communication with these popular T2I models, such as Stable Diffusion, using natural language descriptions. This typically makes an engaging image hard to obtain without expertise in prompt engineering with complex word compositions, magic tags, and annotations. Inspired by the recently released DALLE3 - a T2I model directly built-in ChatGPT that talks human language, we revisit the existing T2I systems endeavoring to align human intent and introduce a new task - interactive text to image (iT2I), where people can interact with LLM for interleaved high-quality image generation/edit/refinement and question answering with stronger images and text correspondences using natural language. In addressing the iT2I problem, we present a simple approach that augments LLMs for iT2I with prompting techniques and off-the-shelf T2I models. We evaluate our approach for iT2I in a variety of common-used scenarios under different LLMs, e.g., ChatGPT, LLAMA, Baichuan, and InternLM. We demonstrate that our approach could be a convenient and low-cost way to introduce the iT2I ability for any existing LLMs and any text-to-image models without any training while bringing little degradation on LLMs' inherent capabilities in, e.g., question answering and code generation. We hope this work could draw broader attention and provide inspiration for boosting user experience in human-machine interactions alongside the image quality of the next-generation T2I systems.",f669d7a6fab0147253178a6fc854e05e3d92fb3f,Semantic Scholar,,, gopro generate and optimize prompts in clip using selfsupervised learning,"['M. Singha', 'Ankit Jha', 'Biplab Banerjee']",https://arxiv.org/pdf/2308.11605,2023-08-22,,"Large-scale foundation models, such as CLIP, have demonstrated remarkable success in visual recognition tasks by embedding images in a semantically rich space. Self-supervised learning (SSL) has also shown promise in improving visual recognition by learning invariant features. However, the combination of CLIP with SSL is found to face challenges due to the multi-task framework that blends CLIP's contrastive loss and SSL's loss, including difficulties with loss weighting and inconsistency among different views of images in CLIP's output space. To overcome these challenges, we propose a prompt learning-based model called GOPro, which is a unified framework that ensures similarity between various augmented views of input images in a shared image-text embedding space, using a pair of learnable image and text projectors atop CLIP, to promote invariance and generalizability. To automatically learn such prompts, we leverage the visual content and style primitives extracted from pre-trained CLIP and adapt them to the target task. In addition to CLIP's cross-domain contrastive loss, we introduce a visual contrastive loss and a novel prompt consistency loss, considering the different views of the images. GOPro is trained end-to-end on all three loss objectives, combining the strengths of CLIP and SSL in a principled manner. Empirical evaluations demonstrate that GOPro outperforms the state-of-the-art prompting techniques on three challenging domain generalization tasks across multiple benchmarks by a significant margin. Our code is available at https://github.com/mainaksingha01/GOPro.",fc9bd3642df2a378c11131362b27deecbd02b70a,Semantic Scholar,,, the devil is in the errors leveraging large language models for finegrained machine translation evaluation,"['Patrick Fernandes', 'Daniel Deutsch', 'M. Finkelstein', 'Parker Riley', 'André F. T. Martins', 'Graham Neubig', 'Ankush Garg', 'J. Clark', 'Markus Freitag', 'Orhan Firat']",https://arxiv.org/pdf/2308.07286,2023-08-14,,"Automatic evaluation of machine translation (MT) is a critical tool driving the rapid iterative development of MT systems. While considerable progress has been made on estimating a single scalar quality score, current metrics lack the informativeness of more detailed schemes that annotate individual errors, such as Multidimensional Quality Metrics (MQM). In this paper, we help fill this gap by proposing AutoMQM, a prompting technique which leverages the reasoning and in-context learning capabilities of large language models (LLMs) and asks them to identify and categorize errors in translations. We start by evaluating recent LLMs, such as PaLM and PaLM-2, through simple score prediction prompting, and we study the impact of labeled data through in-context learning and finetuning. We then evaluate AutoMQM with PaLM-2 models, and we find that it improves performance compared to just prompting for scores (with particularly large gains for larger models) while providing interpretability through error spans that align with human annotations.",fd80f7f3673fc6ca02f192d5d73426f11a4be659,Semantic Scholar,,, "multiparty goal tracking with llms comparing pretraining, finetuning, and prompt engineering","['Angus Addlesee', "Weronika Siei'nska", 'Nancie Gunson', 'Daniel Hernández García', 'C. Dondrup', 'Oliver Lemon']",https://arxiv.org/pdf/2308.15231,2023-08-29,,"This paper evaluates the extent to which current LLMs can capture task-oriented multi-party conversations (MPCs). We have recorded and transcribed 29 MPCs between patients, their companions, and a social robot in a hospital. We then annotated this corpus for multi-party goal-tracking and intent-slot recognition. People share goals, answer each other’s goals, and provide other people’s goals in MPCs - none of which occur in dyadic interactions. To understand user goals in MPCs, we compared three methods in zero-shot and few-shot settings: we fine-tuned T5, created pre-training tasks to train DialogLM using LED, and employed prompt engineering techniques with GPT-3.5-turbo, to determine which approach can complete this novel task with limited data. GPT-3.5-turbo significantly outperformed the others in a few-shot setting. The ‘reasoning’ style prompt, when given 7% of the corpus as example annotated conversations, was the best performing method. It correctly annotated 62.32% of the goal tracking MPCs, and 69.57% of the intent-slot recognition MPCs. A ‘story’ style prompt increased model hallucination, which could be detrimental if deployed in safety-critical settings. We conclude that multi-party conversations still challenge state-of-the-art LLMs.",8a1a8290f7d42b0ce60445a4c0130ef737b3ff69,Semantic Scholar,,, llm4vv developing llmdriven testsuite for compiler validation,"['Christian Munley', 'Aaron Jarmusch', 'Sunita Chandrasekaran']",https://arxiv.org/pdf/2310.04963,2023-10-08,,"Large language models (LLMs) are a new and powerful tool for a wide span of applications involving natural language and demonstrate impressive code generation abilities. In this paper, we explore the capabilitity of state-of-the-art LLMs, including closed-source options like OpenAI GPT-4 and open-source alternatives like Meta AI Codellama, to automatically generate tests and use these tests to validate and verify compiler implementations of a directive-based programming paradigm, OpenACC. Our approach entails exploring various prompt engineering techniques including a code template, retrieval-augmented generation (RAG) with code template, expressive prompt using RAG with code template, one-shot example, and RAG with one-shot example. This paper focusses on (a) exploring the capabilities of the latest LLMs for code generation, (b) investigating prompt and fine tuning methods, and (c) analyzing the outcome of LLMs generated tests",8c52b3bbe5897ba3f42b38c5bfc33bbd48f9a1f2,Semantic Scholar,,, "voice visual oracle for interaction, conversation, and explanation","['Donggang Jia', 'Alexandra Irger', 'Ondrej Strnad', 'Johanna Björklund', 'A. Ynnerman', 'I. Viola']",http://arxiv.org/pdf/2304.04083,2023-04-08,,"We present VOICE, a novel approach to science communication that connects large language models' (LLM) conversational capabilities with interactive exploratory visualization. VOICE introduces several innovative technical contributions that drive our conversational visualization framework. Our foundation is a pack-of-bots that can perform specific tasks, such as assigning tasks, extracting instructions, and generating coherent content. We employ fine-tuning and prompt engineering techniques to tailor bots' performance to their specific roles and accurately respond to user queries. Our interactive text-to-visualization method generates a flythrough sequence matching the content explanation. 
Besides, natural language interaction provides capabilities to navigate and manipulate the 3D models in real-time. The VOICE framework can receive arbitrary voice commands from the user and respond verbally, tightly coupled with corresponding visual representation with low latency and high accuracy. We demonstrate the effectiveness of our approach by applying it to the molecular visualization domain: analyzing three 3D molecular models with multi-scale and multi-instance attributes. We finally evaluate VOICE with the identified educational experts to show the potential of our approach. All supplemental materials are available at https://osf.io/g7fbr.",8ca384547bb4b21b7f38d478119bf3168eb9c9cd,Semantic Scholar,,, "unlocking the potential of chatgpt a comprehensive exploration of its applications, advantages, limitations, and future directions in natural language processing",['Walid Hariri'],http://arxiv.org/pdf/2304.02017,2023-03-27,,"Large language models have revolutionized the field of artificial intelligence and have been used in various applications. Among these models, ChatGPT (Chat Generative Pre-trained Transformer) has been developed by OpenAI, it stands out as a powerful tool that has been widely adopted. ChatGPT has been successfully applied in numerous areas, including chatbots, content generation, language translation, personalized recommendations, and even medical diagnosis and treatment. Its success in these applications can be attributed to its ability to generate human-like responses, understand natural language, and adapt to different contexts. Its versatility and accuracy make it a powerful tool for natural language processing (NLP). However, there are also limitations to ChatGPT, such as its tendency to produce biased responses and its potential to perpetuate harmful language patterns. This article provides a comprehensive overview of ChatGPT, its applications, advantages, and limitations. Additionally, the paper emphasizes the importance of ethical considerations when using this robust tool in real-world scenarios. Finally, This paper contributes to ongoing discussions surrounding artificial intelligence and its impact on vision and NLP domains by providing insights into prompt engineering techniques.",9e93ab728e3e174ec1492009055885a9123d434f,Semantic Scholar,,, simulating hp lovecraft horror literature with the chatgpt large language model,"['E.C. Garrido-Merchán', 'J. L. Arroyo-Barrigüete', 'Roberto Gozalo-Brizuela']",http://arxiv.org/pdf/2305.03429,2023-05-05,,"In this paper, we present a novel approach to simulating H.P. Lovecraft's horror literature using the ChatGPT large language model, specifically the GPT-4 architecture. Our study aims to generate text that emulates Lovecraft's unique writing style and themes, while also examining the effectiveness of prompt engineering techniques in guiding the model's output. To achieve this, we curated a prompt containing several specialized literature references and employed advanced prompt engineering methods. We conducted an empirical evaluation of the generated text by administering a survey to a sample of undergraduate students. Utilizing statistical hypothesis testing, we assessed the students ability to distinguish between genuine Lovecraft works and those generated by our model. Our findings demonstrate that the participants were unable to reliably differentiate between the two, indicating the effectiveness of the GPT-4 model and our prompt engineering techniques in emulating Lovecraft's literary style. 
In addition to presenting the GPT model's capabilities, this paper provides a comprehensive description of its underlying architecture and offers a comparative analysis with related work that simulates other notable authors and philosophers, such as Dennett. By exploring the potential of large language models in the context of literary emulation, our study contributes to the body of research on the applications and limitations of these models in various creative domains.",a7d8a6d8c04bd4554da4219be0f9d3bf87e2e56b,Semantic Scholar,,, protect your prompts protocols for ip protection in llm applications,"['M. V. Wyk', 'M. Bekker', 'X. L. Richards', 'K. Nixon']",http://arxiv.org/pdf/2306.06297,2023-06-09,,"With the rapid adoption of AI in the form of large language models (LLMs), the potential value of carefully engineered prompts has become significant. However, to realize this potential, prompts should be tradable on an open market. Since prompts are, at present, generally economically non-excludable, by virtue of their nature as text, no general competitive market has yet been established. This note discusses two protocols intended to provide protection of prompts, elevating their status as intellectual property, thus confirming the intellectual property rights of prompt engineers, and potentially supporting the flourishing of an open market for LLM prompts.",08fd45ac85916b95f734cc75af8660cff73c33ca,Semantic Scholar,,, abscribe rapid exploration of multiple writing variations in humanai cowriting tasks using large language models,"['Mohi Reza', 'Nathan Laundry', 'Ilya Musabirov', 'Peter Dushniku', 'Zhi Yuan Michael Yu', 'Kashish Mittal', 'Tovi Grossman', 'Michael Liut', 'Anastasia Kuzminykh', 'Joseph Jay Williams']",https://arxiv.org/pdf/2310.00117,2023-09-29,,"Exploring alternative ideas by rewriting text is integral to the writing process. State-of-the-art large language models (LLMs) can simplify writing variation generation. However, current interfaces pose challenges for simultaneous consideration of multiple variations: creating new versions without overwriting text can be difficult, and pasting them sequentially can clutter documents, increasing workload and disrupting writers' flow. To tackle this, we present ABScribe, an interface that supports rapid, yet visually structured, exploration of writing variations in human-AI co-writing tasks. With ABScribe, users can swiftly produce multiple variations using LLM prompts, which are auto-converted into reusable buttons. Variations are stored adjacently within text segments for rapid in-place comparisons using mouse-over interactions on a context toolbar. Our user study with 12 writers shows that ABScribe significantly reduces task workload (d = 1.20, p<0.001), enhances user perceptions of the revision process (d = 2.41, p<0.001) compared to a popular baseline workflow, and provides insights into how writers explore variations using LLMs.",0f71c1e2acf286951544d3bd9eb5d85acfba5af1,Semantic Scholar,,, incontext impersonation reveals large language models' strengths and biases,"['Leonard Salewski', 'Stephan Alaniz', 'Isabel Rio-Torto', 'Eric Schulz', 'Zeynep Akata']",http://arxiv.org/pdf/2305.14930,2023-05-24,,"In everyday conversations, humans can take on different roles and adapt their vocabulary to their chosen roles. We explore whether LLMs can take on, that is impersonate, different roles when they generate text in-context. We ask LLMs to assume different personas before solving vision and language tasks. 
We do this by prefixing the prompt with a persona that is associated either with a social identity or domain expertise. In a multi-armed bandit task, we find that LLMs pretending to be children of different ages recover human-like developmental stages of exploration. In a language-based reasoning task, we find that LLMs impersonating domain experts perform better than LLMs impersonating non-domain experts. Finally, we test whether LLMs' impersonations are complementary to visual information when describing different categories. We find that impersonation can improve performance: an LLM prompted to be a bird expert describes birds better than one prompted to be a car expert. However, impersonation can also uncover LLMs' biases: an LLM prompted to be a man describes cars better than one prompted to be a woman. These findings demonstrate that LLMs are capable of taking on diverse roles and that this in-context impersonation can be used to uncover their hidden strengths and biases.",19c63eade265d8a47d160098d97194b3b83d3770,Semantic Scholar,,, chatgpt for plcdcs control logic generation,"['Heiko Koziolek', 'Sten Gruener', 'Virendra Ashiwal']",https://arxiv.org/pdf/2305.15809,2023-05-25,,"Large language models (LLMs) providing generative AI have become popular to support software engineers in creating, summarizing, optimizing, and documenting source code. It is still unknown how LLMs can support control engineers using typical control programming languages in programming tasks. Researchers have explored GitHub CoPilot or DeepMind AlphaCode for source code generation but did not yet tackle control logic programming. A key contribution of this paper is an exploratory study, for which we created 100 LLM prompts in 10 representative categories to analyze control logic generation for of PLCs and DCS from natural language. We tested the prompts by generating answers with ChatGPT using the GPT-4 LLM. It generated syntactically correct IEC 61131-3 Structured Text code in many cases and demonstrated useful reasoning skills that could boost control engineer productivity. Our prompt collection is the basis for a more formal LLM benchmark to test and compare such models for control logic generation.",1c1b83df13de4334e48a4c2039bc7ddfa374c486,Semantic Scholar,,, saytap language to quadrupedal locomotion,"['Yujin Tang', 'Wenhao Yu', 'Jie Tan', 'H. Zen', 'Aleksandra Faust', 'Tatsuya Harada']",https://arxiv.org/pdf/2306.07580,2023-06-13,,"Large language models (LLMs) have demonstrated the potential to perform high-level planning. Yet, it remains a challenge for LLMs to comprehend low-level commands, such as joint angle targets or motor torques. This paper proposes an approach to use foot contact patterns as an interface that bridges human commands in natural language and a locomotion controller that outputs these low-level commands. This results in an interactive system for quadrupedal robots that allows the users to craft diverse locomotion behaviors flexibly. We contribute an LLM prompt design, a reward function, and a method to expose the controller to the feasible distribution of contact patterns. The results are a controller capable of achieving diverse locomotion patterns that can be transferred to real robot hardware. Compared with other design choices, the proposed approach enjoys more than 50% success rate in predicting the correct contact patterns and can solve 10 more tasks out of a total of 30 tasks. 
Our project site is: https://saytap.github.io.",1fc21645ccc8e99eb8162e5f91407148b7f77e3d,Semantic Scholar,,, "mmhqaicl multimodal incontext learning for hybrid question answering over text, tables and images","['Weihao Liu', 'Fangyu Lei', 'Tongxu Luo', 'Jiahe Lei', 'Shizhu He', 'Jun Zhao', 'Kang Liu']",https://arxiv.org/pdf/2309.04790,2023-09-09,,"In the real world, knowledge often exists in a multimodal and heterogeneous form. Addressing the task of question answering with hybrid data types, including text, tables, and images, is a challenging task (MMHQA). Recently, with the rise of large language models (LLM), in-context learning (ICL) has become the most popular way to solve QA problems. We propose MMHQA-ICL framework for addressing this problems, which includes stronger heterogeneous data retriever and an image caption module. Most importantly, we propose a Type-specific In-context Learning Strategy for MMHQA, enabling LLMs to leverage their powerful performance in this task. We are the first to use end-to-end LLM prompting method for this task. Experimental results demonstrate that our framework outperforms all baselines and methods trained on the full dataset, achieving state-of-the-art results under the few-shot setting on the MultimodalQA dataset.",27d6d02e24de259e3aa38e556a81f89ec505816e,Semantic Scholar,,, lmcanvas objectoriented interaction to personalize large language modelpowered writing environments,"['Tae Soo Kim', 'Arghya Sarkar', 'Yoonjoo Lee', 'Minsuk Chang', 'Juho Kim']",http://arxiv.org/pdf/2303.15125,2023-03-27,,"Large language models (LLMs) can enhance writing by automating or supporting specific tasks in writers' workflows (e.g., paraphrasing, creating analogies). Leveraging this capability, a collection of interfaces have been developed that provide LLM-powered tools for specific writing tasks. However, these interfaces provide limited support for writers to create personal tools for their own unique tasks, and may not comprehensively fulfill a writer's needs -- requiring them to continuously switch between interfaces during writing. In this work, we envision LMCanvas, an interface that enables writers to create their own LLM-powered writing tools and arrange their personal writing environment by interacting with""blocks""in a canvas. In this interface, users can create text blocks to encapsulate writing and LLM prompts, model blocks for model parameter configurations, and connect these to create pipeline blocks that output generations. In this workshop paper, we discuss the design for LMCanvas and our plans to develop this concept.",2cdff023cd4b185bb452f3c7399580db2d0fdfcd,Semantic Scholar,,, flocks of stochastic parrots differentially private prompt learning for large language models,"['Haonan Duan', 'Adam Dziedzic', 'Nicolas Papernot', 'Franziska Boenisch']",http://arxiv.org/pdf/2305.15594,2023-05-24,,"Large language models (LLMs) are excellent in-context learners. However, the sensitivity of data contained in prompts raises privacy concerns. Our work first shows that these concerns are valid: we instantiate a simple but highly effective membership inference attack against the data used to prompt LLMs. To address this vulnerability, one could forego prompting and resort to fine-tuning LLMs with known algorithms for private gradient descent. However, this comes at the expense of the practicality and efficiency offered by prompting. Therefore, we propose to privately learn to prompt. 
We first show that soft prompts can be obtained privately through gradient descent on downstream data. However, this is not the case for discrete prompts. Thus, we orchestrate a noisy vote among an ensemble of LLMs presented with different prompts, i.e., a flock of stochastic parrots. The vote privately transfers the flock's knowledge into a single public prompt. We show that LLMs prompted with our private algorithms closely match the non-private baselines. For example, using GPT3 as the base model, we achieve a downstream accuracy of 92.7% on the sst2 dataset with ($\epsilon=0.147, \delta=10^{-6}$)-differential privacy vs. 95.2% for the non-private baseline. Through our experiments, we also show that our prompt-based approach is easily deployed with existing commercial APIs.",2f2a430ba6c93bcfaf4818316ff8a27b1e034b1a,Semantic Scholar,,, knowledge crosswords geometric reasoning over structured knowledge with large language models,"['Wenxuan Ding', 'Shangbin Feng', 'Yuhan Liu', 'Zhaoxuan Tan', 'Vidhisha Balachandran', 'Tianxing He', 'Yulia Tsvetkov']",https://arxiv.org/pdf/2310.01290,2023-10-02,,"Large language models (LLMs) are widely adopted in knowledge-intensive tasks and have achieved impressive performance thanks to their knowledge abilities. While LLMs have demonstrated outstanding performance on atomic or linear (multi-hop) QA tasks, whether they can reason in knowledge-rich scenarios with interweaving constraints remains an underexplored problem. In this work, we propose geometric reasoning over structured knowledge, where pieces of knowledge are connected in a graph structure and models need to fill in the missing information. Such geometric knowledge reasoning would require the ability to handle structured knowledge, reason with uncertainty, verify facts, and backtrack when an error occurs. We propose Knowledge Crosswords, a multi-blank QA dataset where each problem consists of a natural language question representing the geometric constraints of an incomplete entity network, where LLMs are tasked with working out the missing entities while meeting all factual constraints. Knowledge Crosswords contains 2,101 individual problems, covering various knowledge domains and further divided into three difficulty levels. We conduct extensive experiments to evaluate existing LLM prompting approaches on the Knowledge Crosswords benchmark. We additionally propose two new approaches, Staged Prompting and Verify-All, to augment LLMs' ability to backtrack and verify structured constraints. Our results demonstrate that while baseline approaches perform well on easier problems but struggle with hard ones, our proposed Verify-All outperforms other methods by a large margin and is more robust with hard problems. Further analysis reveals that LLMs' ability of geometric reasoning over structured knowledge is still far from robust or perfect, susceptible to confounders such as the order of options, certain structural patterns, assumption of existence of correct answer, and more.",33d944de189d6edf3a510ea195803a381c5a3bab,Semantic Scholar,,, gear augmenting language models with generalizable and efficient tool resolution,"['Yining Lu', 'Haoping Yu', 'Daniel Khashabi']",https://arxiv.org/pdf/2307.08775,2023-07-17,,"Augmenting large language models (LLM) to use external tools enhances their performance across a variety of tasks. However, prior works over-rely on task-specific demonstration of tool use that limits their generalizability and computational cost due to making many calls to large-scale LLMs. 
We introduce GEAR, a computationally efficient query-tool grounding algorithm that is generalizable to various tasks that require tool use while not relying on task-specific demonstrations. GEAR achieves better efficiency by delegating tool grounding and execution to small language models (SLM) and LLM, respectively; while leveraging semantic and pattern-based evaluation at both question and answer levels for generalizable tool grounding. We evaluate GEAR on 14 datasets across 6 downstream tasks, demonstrating its strong generalizability to novel tasks, tools and different SLMs. Despite offering more efficiency, GEAR achieves higher precision in tool grounding compared to prior strategies using LLM prompting, thus improving downstream accuracy at a reduced computational cost. For example, we demonstrate that GEAR-augmented GPT-J and GPT-3 outperform counterpart tool-augmented baselines because of better tool use.",3bd83ff979f3c0e9470f23c360a18333593dc5a1,Semantic Scholar,,, retrievalaugmented generation to improve math questionanswering tradeoffs between groundedness and human preference,"['Zachary Levonian', 'Chenglu Li', 'Wangda Zhu', 'Anoushka Gade', 'Owen Henkel', 'Millie-Ellen Postle', 'Wanli Xing']",https://arxiv.org/pdf/2310.03184,2023-10-04,,"For middle-school math students, interactive question-answering (QA) with tutors is an effective way to learn. The flexibility and emergent capabilities of generative large language models (LLMs) has led to a surge of interest in automating portions of the tutoring process - including interactive QA to support conceptual discussion of mathematical concepts. However, LLM responses to math questions can be incorrect or mismatched to the educational context - such as being misaligned with a school's curriculum. One potential solution is retrieval-augmented generation (RAG), which involves incorporating a vetted external knowledge source in the LLM prompt to increase response quality. In this paper, we designed prompts that retrieve and use content from a high-quality open-source math textbook to generate responses to real student questions. We evaluate the efficacy of this RAG system for middle-school algebra and geometry QA by administering a multi-condition survey, finding that humans prefer responses generated using RAG, but not when responses are too grounded in the textbook content. We argue that while RAG is able to improve response quality, designers of math QA systems must consider trade-offs between generating responses preferred by students and responses closely matched to specific educational resources.",3dc1b657bf821b731c5ed0396823b67c10d54ba1,Semantic Scholar,,, udapdr unsupervised domain adaptation via llm prompting and distillation of rerankers,"['Jon Saad-Falcon', 'O. Khattab', 'Keshav Santhanam', 'Radu Florian', 'M. Franz', 'S. Roukos', 'Avirup Sil', 'Md Arafat Sultan', 'Christopher Potts']",https://arxiv.org/pdf/2303.00807,2023-03-01,,"Many information retrieval tasks require large labeled datasets for fine-tuning. However, such datasets are often unavailable, and their utility for real-world applications can diminish quickly due to domain shifts. To address this challenge, we develop and motivate a method for using large language models (LLMs) to generate large numbers of synthetic queries cheaply. The method begins by generating a small number of synthetic queries using an expensive LLM. 
After that, a much less expensive one is used to create large numbers of synthetic queries, which are used to fine-tune a family of reranker models. These rerankers are then distilled into a single efficient retriever for use in the target domain. We show that this technique boosts zero-shot accuracy in long-tail domains and achieves substantially lower latency than standard reranking methods.",44b0d2e884efa5344e50424dbe2edf616981f201,Semantic Scholar,,, iterative zeroshot llm prompting for knowledge graph construction,"['S. Carta', 'Alessandro Giuliani', 'L. piano', 'Alessandro Sebastian Podda', 'Livio Pompianu', 'Sandro Gabriele Tiddia']",http://arxiv.org/pdf/2307.01128,2023-07-03,,"In the current digitalization era, capturing and effectively representing knowledge is crucial in most real-world scenarios. In this context, knowledge graphs represent a potent tool for retrieving and organizing a vast amount of information in a properly interconnected and interpretable structure. However, their generation is still challenging and often requires considerable human effort and domain expertise, hampering the scalability and flexibility across different application fields. This paper proposes an innovative knowledge graph generation approach that leverages the potential of the latest generative large language models, such as GPT-3.5, that can address all the main critical issues in knowledge graph building. The approach is conveyed in a pipeline that comprises novel iterative zero-shot and external knowledge-agnostic strategies in the main stages of the generation process. Our unique manifold approach may encompass significant benefits to the scientific community. In particular, the main contribution can be summarized by: (i) an innovative strategy for iteratively prompting large language models to extract relevant components of the final graph; (ii) a zero-shot strategy for each prompt, meaning that there is no need for providing examples for""guiding""the prompt result; (iii) a scalable solution, as the adoption of LLMs avoids the need for any external resources or human expertise. To assess the effectiveness of our proposed model, we performed experiments on a dataset that covered a specific domain. We claim that our proposal is a suitable solution for scalable and versatile knowledge graph construction and may be applied to different and novel contexts.",50bdea5132ef4b8cf25b0d9f3ac2ee0d09bf18cb,Semantic Scholar,,, rosgpt_vision commanding robots using only language models' prompts,"['Bilel Benjdira', 'A. Koubâa', 'Anas M. Ali']",https://arxiv.org/pdf/2308.11236,2023-08-22,,"In this paper, we argue that the next generation of robots can be commanded using only Language Models' prompts. Every prompt interrogates separately a specific Robotic Modality via its Modality Language Model (MLM). A central Task Modality mediates the whole communication to execute the robotic mission via a Large Language Model (LLM). This paper gives this new robotic design pattern the name of: Prompting Robotic Modalities (PRM). Moreover, this paper applies this PRM design pattern in building a new robotic framework named ROSGPT_Vision. ROSGPT_Vision allows the execution of a robotic task using only two prompts: a Visual and an LLM prompt. The Visual Prompt extracts, in natural language, the visual semantic features related to the task under consideration (Visual Robotic Modality). Meanwhile, the LLM Prompt regulates the robotic reaction to the visual description (Task Modality). 
The framework automates all the mechanisms behind these two prompts. The framework enables the robot to address complex real-world scenarios by processing visual data, making informed decisions, and carrying out actions automatically. The framework comprises one generic vision module and two independent ROS nodes. As a test application, we used ROSGPT_Vision to develop CarMate, which monitors the driver's distraction on the roads and makes real-time vocal notifications to the driver. We showed how ROSGPT_Vision significantly reduced the development cost compared to traditional methods. We demonstrated how to improve the quality of the application by optimizing the prompting strategies, without delving into technical details. ROSGPT_Vision is shared with the community (link: https://github.com/bilel-bj/ROSGPT_Vision) to advance robotic research in this direction and to build more robotic frameworks that implement the PRM design pattern and enables controlling robots using only prompts.",53e8d327e7ceda6f4efd321752da57edbaee6257,Semantic Scholar,,, teler a general taxonomy of llm prompts for benchmarking complex tasks,"['Shubhra (Santu) Karmaker', 'Dongji Feng']",http://arxiv.org/pdf/2305.11430,2023-05-19,,"While LLMs have shown great success in understanding and generating text in traditional conversational settings, their potential for performing ill-defined complex tasks is largely under-studied. Indeed, we are yet to conduct comprehensive benchmarking studies with multiple LLMs that are exclusively focused on a complex task. However, conducting such benchmarking studies is challenging because of the large variations in LLMs' performance when different prompt types/styles are used and different degrees of detail are provided in the prompts. To address this issue, the paper proposes a general taxonomy that can be used to design prompts with specific properties in order to perform a wide range of complex tasks. This taxonomy will allow future benchmarking studies to report the specific categories of prompts used as part of the study, enabling meaningful comparisons across different studies. Also, by establishing a common standard through this taxonomy, researchers will be able to draw more accurate conclusions about LLMs' performance on a specific complex task.",5645502d73c6907f1671923638773152e55bfb00,Semantic Scholar,,, mathdial a dialogue tutoring dataset with rich pedagogical properties grounded in math reasoning problems,"['Jakub Macina', 'Nico Daheim', 'Sankalan Pal Chowdhury', 'Tanmay Sinha', 'Manu Kapur', 'Iryna Gurevych', 'Mrinmaya Sachan']",http://arxiv.org/pdf/2305.14536,2023-05-23,,"While automatic dialogue tutors hold great potential in making education personalized and more accessible, research on such systems has been hampered by a lack of sufficiently large and high-quality datasets. Collecting such datasets remains challenging, as recording tutoring sessions raises privacy concerns and crowdsourcing leads to insufficient data quality. To address this, we propose a framework to generate such dialogues by pairing human teachers with a Large Language Model (LLM) prompted to represent common student errors. We describe how we use this framework to collect MathDial, a dataset of 3k one-to-one teacher-student tutoring dialogues grounded in multi-step math reasoning problems. While models like GPT-3 are good problem solvers, they fail at tutoring because they generate factually incorrect feedback or are prone to revealing solutions to students too early. 
To overcome this, we let teachers provide learning opportunities to students by guiding them using various scaffolding questions according to a taxonomy of teacher moves. We demonstrate MathDial and its extensive annotations can be used to finetune models to be more effective tutors (and not just solvers). We confirm this by automatic and human evaluation, notably in an interactive setting that measures the trade-off between student solving success and telling solutions. The dataset is released publicly.",6cd26d124ffeb6ce301ef351aada27fa0852f81b,Semantic Scholar,,, retrieverewriteanswer a kgtotext enhanced llms framework for knowledge graph question answering,"['Yike Wu', 'Nan Hu', 'Sheng Bi', 'G. Qi', 'J. Ren', 'Anhuan Xie', 'Wei Song']",https://arxiv.org/pdf/2309.11206,2023-09-20,,"Despite their competitive performance on knowledge-intensive tasks, large language models (LLMs) still have limitations in memorizing all world knowledge especially long tail knowledge. In this paper, we study the KG-augmented language model approach for solving the knowledge graph question answering (KGQA) task that requires rich world knowledge. Existing work has shown that retrieving KG knowledge to enhance LLMs prompting can significantly improve LLMs performance in KGQA. However, their approaches lack a well-formed verbalization of KG knowledge, i.e., they ignore the gap between KG representations and textual representations. To this end, we propose an answer-sensitive KG-to-Text approach that can transform KG knowledge into well-textualized statements most informative for KGQA. Based on this approach, we propose a KG-to-Text enhanced LLMs framework for solving the KGQA task. Experiments on several KGQA benchmarks show that the proposed KG-to-Text augmented LLMs approach outperforms previous KG-augmented LLMs approaches regarding answer accuracy and usefulness of knowledge statements.",785c0d4efd3aaa946f8bdcd12b38a147cc36b794,Semantic Scholar,,, federated large language model a position paper,"['Chaochao Chen', 'Xiaohua Feng', 'Jun Zhou', 'Jianwei Yin', 'Xiaolin Zheng']",https://arxiv.org/pdf/2307.08925,2023-07-18,,"Large scale language models (LLM) have received significant attention and found diverse applications across various domains, but their development encounters challenges in real-world scenarios. These challenges arise due to the scarcity of public domain data availability and the need to maintain privacy with respect to private domain data. To address these issues, federated learning (FL) has emerged as a promising technology that enables collaborative training of shared models while preserving decentralized data. We propose the concept of federated LLM, which comprises three key components, i.e., federated LLM pre-training, federated LLM fine-tuning, and federated LLM prompt engineering. For each component, we discuss its advantage over traditional LLM training methods and propose specific engineering strategies for implementation. Furthermore, we explore the novel challenges introduced by the integration of FL and LLM. 
We analyze existing solutions and identify potential obstacles faced by these solutions within the context of federated LLM.",7aad760762c4a10dfbc2d3391eb8bdb28c80b236,Semantic Scholar,,, adaplanner adaptive planning from feedback with language models,"['Haotian Sun', 'Yuchen Zhuang', 'Lingkai Kong', 'Bo Dai', 'Chao Zhang']",http://arxiv.org/pdf/2305.16653,2023-05-26,,"Large language models (LLMs) have recently demonstrated the potential in acting as autonomous agents for sequential decision-making tasks. However, most existing methods either take actions greedily without planning or rely on static plans that are not adaptable to environmental feedback. Consequently, the sequential decision-making performance of LLM agents degenerates with problem complexity and plan horizons increase. We propose a closed-loop approach, AdaPlanner, which allows the LLM agent to refine its self-generated plan adaptively in response to environmental feedback. In AdaPlanner, the LLM agent adaptively refines its plan from feedback with both in-plan and out-of-plan refinement strategies. To mitigate hallucination, we develop a code-style LLM prompt structure that facilitates plan generation across a variety of tasks, environments, and agent capabilities. Furthermore, we propose a skill discovery mechanism that leverages successful plans as few-shot exemplars, enabling the agent to plan and refine with fewer task demonstrations. Our experiments in the ALFWorld and MiniWoB++ environments demonstrate that AdaPlanner outperforms state-of-the-art baselines by 3.73% and 4.11% while utilizing 2x and 600x fewer samples, respectively.",8e37dc1215681aa153a51c07078ba8befd6a6e01,Semantic Scholar,,, simple llm prompting is stateoftheart for robust and multilingual dialogue evaluation,"['J. Mendoncca', 'Patrícia Pereira', 'Joao Paulo Carvalho', 'A. Lavie', 'I. Trancoso']",https://arxiv.org/pdf/2308.16797,2023-08-31,,"Despite significant research effort in the development of automatic dialogue evaluation metrics, little thought is given to evaluating dialogues other than in English. At the same time, ensuring metrics are invariant to semantically similar responses is also an overlooked topic. In order to achieve the desired properties of robustness and multilinguality for dialogue evaluation metrics, we propose a novel framework that takes advantage of the strengths of current evaluation models with the newly-established paradigm of prompting Large Language Models (LLMs). Empirical results show our framework achieves state of the art results in terms of mean Spearman correlation scores across several benchmarks and ranks first place on both the Robust and Multilingual tasks of the DSTC11 Track 4 “Automatic Evaluation Metrics for Open-Domain Dialogue Systems”, proving the evaluation capabilities of prompted LLMs.",bcefc74b20649fd41ea05d87a3fa512d2559fc8d,Semantic Scholar,,, alpacafarm a simulation framework for methods that learn from human feedback,"['Yann Dubois', 'Xuechen Li', 'Rohan Taori', 'Tianyi Zhang', 'Ishaan Gulrajani', 'Jimmy Ba', 'Carlos Guestrin', 'Percy Liang', 'Tatsunori Hashimoto']",https://arxiv.org/pdf/2305.14387,2023-05-22,,"Large language models (LLMs) such as ChatGPT have seen widespread adoption due to their strong instruction-following abilities. Developing these LLMs involves a complex yet poorly understood workflow requiring training with human feedback. 
Replicating and understanding this instruction-following requires tackling three major challenges: the high cost of data collection, the lack of trustworthy evaluation, and the absence of reference method implementations. We address these challenges with AlpacaFarm, a simulator that enables research and development for learning from feedback at a low cost. First, we design LLM prompts to simulate human feedback that are 50x cheaper than crowdworkers and display high agreement with humans. Second, we propose an automatic evaluation and validate it against human instructions obtained on real-world interactions. Third, we contribute reference implementations for several methods (PPO, DPO, best-of-n, expert iteration, and more) that learn from pairwise feedback. Finally, as an end-to-end validation of AlpacaFarm, we train and evaluate eleven models on 10k pairs of real human feedback and show that rankings of models trained in AlpacaFarm match rankings of models trained on human data. As a demonstration of the research possible in AlpacaFarm, we find that methods that use a reward model can substantially improve over supervised fine-tuning and that our reference PPO implementation leads to a +10% improvement in win-rate against Davinci003. We release all components of AlpacaFarm at https://github.com/tatsu-lab/alpaca_farm.",cb6cc7d28d06a0d7c0d3f0d7ee551bbc86dbc3aa,Semantic Scholar,,, lpml llmprompting markup language for mathematical reasoning,"['Ryutaro Yamauchi', 'Sho Sonoda', 'Akiyoshi Sannai', 'Wataru Kumagai']",https://arxiv.org/pdf/2309.13078,2023-09-21,,"In utilizing large language models (LLMs) for mathematical reasoning, addressing the errors in the reasoning and calculation present in the generated text by LLMs is a crucial challenge. In this paper, we propose a novel framework that integrates the Chain-of-Thought (CoT) method with an external tool (Python REPL). We discovered that by prompting LLMs to generate structured text in XML-like markup language, we could seamlessly integrate CoT and the external tool and control the undesired behaviors of LLMs. With our approach, LLMs can utilize Python computation to rectify errors within CoT. We applied our method to ChatGPT (GPT-3.5) to solve challenging mathematical problems and demonstrated that combining CoT and Python REPL through the markup language enhances the reasoning capability of LLMs. Our approach enables LLMs to write the markup language and perform advanced mathematical reasoning using only zero-shot prompting.",cf237f3a6ed3e8fd970c15bf1f0bdf94f34da4a9,Semantic Scholar,,, heap hierarchical policies for web actions using llms,"['Paloma Sodhi', 'S. Branavan', 'Ryan McDonald']",https://arxiv.org/pdf/2310.03720,2023-10-05,,"Large language models (LLMs) have demonstrated remarkable capabilities in performing a range of instruction following tasks in few and zero-shot settings. However, teaching LLMs to perform tasks on the web presents fundamental challenges -- combinatorially large open-world tasks and variations across web interfaces. We tackle these challenges by leveraging LLMs to decompose web tasks into a collection of sub-tasks, each of which can be solved by a low-level, closed-loop policy. These policies constitute a shared grammar across tasks, i.e., new web tasks can be expressed as a composition of these policies. 
We propose a novel framework, Hierarchical Policies for Web Actions using LLMs (HeaP), that learns a set of hierarchical LLM prompts from demonstrations for planning high-level tasks and executing them via a sequence of low-level policies. We evaluate HeaP against a range of baselines on a suite of web tasks, including MiniWoB++, WebArena, a mock airline CRM, as well as live website interactions, and show that it is able to outperform prior works using orders of magnitude less data.",da0a170656a336f82fa8cf00289d1cc944d9b630,Semantic Scholar,,, check your facts and try again improving large language models with external knowledge and automated feedback,"['Baolin Peng', 'Michel Galley', 'Pengcheng He', 'Hao Cheng', 'Yujia Xie', 'Yu Hu', 'Qiuyuan Huang', 'Lars Lidén', 'Zhou Yu', 'Weizhu Chen', 'Jianfeng Gao']",http://arxiv.org/pdf/2302.12813,2023-02-24,,"Large language models (LLMs), such as ChatGPT, are able to generate human-like, fluent responses for many downstream tasks, e.g., task-oriented dialog and question answering. However, applying LLMs to real-world, mission-critical applications remains challenging mainly due to their tendency to generate hallucinations and their inability to use external knowledge. This paper proposes a LLM-Augmenter system, which augments a black-box LLM with a set of plug-and-play modules. Our system makes the LLM generate responses grounded in external knowledge, e.g., stored in task-specific databases. It also iteratively revises LLM prompts to improve model responses using feedback generated by utility functions, e.g., the factuality score of a LLM-generated response. The effectiveness of LLM-Augmenter is empirically validated on two types of scenarios, task-oriented dialog and open-domain question answering. LLM-Augmenter significantly reduces ChatGPT's hallucinations without sacrificing the fluency and informativeness of its responses. We make the source code and models publicly available.",e5c72b92c48d68594b290c84a8904da7c8335554,Semantic Scholar,,, autoplan automatic planning of interactive decisionmaking tasks with large language models,"['Siqi Ouyang', 'Lei Li']",https://arxiv.org/pdf/2305.15064,2023-05-24,,"Recent large language models (LLMs) are promising for making decisions in grounded environments. However, LLMs frequently fail in complex decision-making tasks due to the misalignment between the pre-trained knowledge in LLMs and the actual rules in the environment. Existing methods require either costly gradient computation or lengthy in-context demonstrations. In this paper, we propose AutoPlan, an approach to guide LLM-based agents to accomplish interactive decision-making tasks. AutoPlan augments the LLM prompt with a task-solving plan and optimizes it through iterative experience collection and reflection. Our experiments show that AutoPlan, though using no in-context demonstrations, achieves success rates on par with the baselines using human-written demonstrations on ALFWorld and even outperforms them by 8% on HotpotQA. The code is available at https://github.com/owaski/AutoPlan.",e814deb54d154aad19ae2b72a2e4dd3376175bb5,Semantic Scholar,,, promptagator fewshot dense retrieval from 8 examples,"['Zhuyun Dai', 'Vincent Zhao', 'Ji Ma', 'Yi Luan', 'Jianmo Ni', 'Jing Lu', 'A. Bakalov', 'Kelvin Guu', 'Keith B. 
Hall', 'Ming-Wei Chang']",http://arxiv.org/pdf/2209.11755,2022-09-23,,"Much recent research on information retrieval has focused on how to transfer from one task (typically with abundant supervised data) to various other tasks where supervision is limited, with the implicit assumption that it is possible to generalize from one task to all the rest. However, this overlooks the fact that there are many diverse and unique retrieval tasks, each targeting different search intents, queries, and search domains. In this paper, we suggest working on Few-shot Dense Retrieval, a setting where each task comes with a short description and a few examples. To amplify the power of a few examples, we propose Prompt-based Query Generation for Retriever (Promptagator), which leverages large language models (LLM) as a few-shot query generator, and creates task-specific retrievers based on the generated data. Powered by LLM's generalization ability, Promptagator makes it possible to create task-specific end-to-end retrievers solely based on a few examples without using Natural Questions or MS MARCO to train question generators or dual encoders. Surprisingly, LLM prompting with no more than 8 examples allows dual encoders to outperform heavily engineered models trained on MS MARCO like ColBERT v2 by more than 1.2 nDCG on average on 11 retrieval sets. Further training standard-size re-rankers using the same generated data yields another 5.0 point nDCG improvement. Our studies determine that query generation can be far more effective than previously observed, especially when a small amount of task-specific knowledge is given.",e86009d9f9b1cdf083a48d087552bc4153784451,Semantic Scholar,,, sgptod building task bots effortlessly via schemaguided llm prompting,"['Xiaoying Zhang', 'Baolin Peng', 'Kun Li', 'Jingyan Zhou', 'Helen M. Meng']",http://arxiv.org/pdf/2305.09067,2023-05-15,,"Building end-to-end task bots and maintaining their integration with new functionalities using minimal human efforts is a long-standing challenge in dialog research. Recently large language models (LLMs) have demonstrated exceptional proficiency in conversational engagement and adherence to instructions across various downstream tasks. In this work, we introduce SGP-TOD, Schema-Guided Prompting for building Task-Oriented Dialog systems effortlessly based on LLMs. Utilizing the symbolic knowledge -- task schema, we instruct fixed LLMs to generate appropriate responses on novel tasks, circumventing the need for training data. Specifically, SGP-TOD comprises three components: an LLM for engaging with users, a DST Prompter to aid the LLM with dialog state tracking, which is then used to retrieve database items, and a Policy Prompter to elicit proper responses adhering to the provided dialog policy. Experimental results on Multiwoz, RADDLE and STAR datasets show that our training-free strategy SGP-TOD, without any task-specific data, yields state-of-the-art (SOTA) zero-shot performance, greatly surpassing the few-shot approaches. In a domain-extension setting, SGP-TOD aptly adapts to new functionalities by merely adding supplementary schema rules.
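The Promptagator entry above turns a handful of (document, query) examples into a prompt that asks an LLM to generate synthetic queries for unlabeled documents. A rough sketch of that prompt construction follows, assuming a hypothetical `llm()` completion function; the exact template used in the paper is not reproduced here.

```python
from typing import Callable, List, Tuple

def build_query_generation_prompt(task_description: str,
                                  examples: List[Tuple[str, str]],
                                  document: str) -> str:
    """Few-shot prompt: task description, up to 8 (document, query) pairs, then the new document."""
    lines = [task_description, ""]
    for doc, query in examples[:8]:
        lines.append(f"Document: {doc}")
        lines.append(f"Query: {query}")
        lines.append("")
    lines.append(f"Document: {document}")
    lines.append("Query:")
    return "\n".join(lines)

def generate_synthetic_queries(llm: Callable[[str], str],
                               task_description: str,
                               examples: List[Tuple[str, str]],
                               corpus: List[str]) -> List[Tuple[str, str]]:
    """Create (synthetic query, document) pairs that could train a task-specific retriever."""
    pairs = []
    for doc in corpus:
        prompt = build_query_generation_prompt(task_description, examples, doc)
        pairs.append((llm(prompt).strip(), doc))
    return pairs
```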
We make our code and data publicly available.",ec56f49bef8925dc8931cc261ab3aca4dd36ad2d,Semantic Scholar,,, prefer prompt ensemble learning via feedbackreflectrefine,"['Chenrui Zhang', 'Lina Liu', 'Jinpeng Wang', 'Chuyuan Wang', 'Xiaodi Sun', 'Hongyu Wang', 'Mingchen Cai']",https://arxiv.org/pdf/2308.12033,2023-08-23,,"As an effective tool for eliciting the power of Large Language Models (LLMs), prompting has recently demonstrated unprecedented abilities across a variety of complex tasks. To further improve the performance, prompt ensemble has attracted substantial interest for tackling the hallucination and instability of LLMs. However, existing methods usually adopt a two-stage paradigm, which requires a pre-prepared set of prompts with substantial manual effort, and is unable to perform directed optimization for different weak learners. In this paper, we propose a simple, universal, and automatic method named PREFER (Prompt Ensemble learning via Feedback-Reflect-Refine) to address the stated limitations. Specifically, given the fact that weak learners are supposed to focus on hard examples during boosting, PREFER builds a feedback mechanism for reflecting on the inadequacies of existing weak learners. Based on this, the LLM is required to automatically synthesize new prompts for iterative refinement. Moreover, to enhance stability of the prompt effect evaluation, we propose a novel prompt bagging method involving forward and backward thinking, which is superior to majority voting and is beneficial for both feedback and weight calculation in boosting. Extensive experiments demonstrate that our PREFER achieves state-of-the-art performance in multiple types of tasks by a significant margin. We have made our code publicly available.",f53a4f34757d1f237446b4d887d5323f2a17ed02,Semantic Scholar,,, empowering private tutoring by chaining large language models,"['Yulin Chen', 'Ning Ding', 'Hai-Tao Zheng', 'Zhiyuan Liu', 'Maosong Sun', 'Bowen Zhou']",https://arxiv.org/pdf/2309.08112,2023-09-15,,"Artificial intelligence has been applied in various aspects of online education to facilitate teaching and learning. However, few approaches have been made toward a complete AI-powered tutoring system. In this work, we explore the development of a full-fledged intelligent tutoring system powered by state-of-the-art large language models (LLMs), covering automatic course planning and adjusting, tailored instruction, and flexible quiz evaluation. To make the system robust to prolonged interaction and cater to individualized education, the system is decomposed into three inter-connected core processes: interaction, reflection, and reaction. Each process is implemented by chaining LLM-powered tools along with dynamically updated memory modules. Tools are LLMs prompted to execute one specific task at a time, while memories are data storage that gets updated during the education process. Statistical results from learning logs demonstrate the effectiveness and mechanism of each tool usage. Subjective feedback from human users reveals the usability of each function, and comparison with ablation systems further testifies the benefits of the designed processes in long-term interaction.",f7842099bbde74dc5aec70bb6af85b88de08ed13,Semantic Scholar,,, promptchainer chaining large language model prompts through visual programming,"['Tongshuang Sherry Wu', 'Ellen Jiang', 'Aaron Donsbach', 'J. Gray', 'A. Molina', 'Michael Terry', 'Carrie J. 
Cai']",https://arxiv.org/pdf/2203.06566,2022-03-13,,"While LLMs have made it possible to rapidly prototype new ML functionalities, many real-world applications involve complex tasks that cannot be easily handled via a single run of an LLM. Recent work has found that chaining multiple LLM runs together (with the output of one step being the input to the next) can help users accomplish these more complex tasks, and in a way that is perceived to be more transparent and controllable. However, it remains unknown what users need when authoring their own LLM chains – a key step to lowering the barriers for non-AI-experts to prototype AI-infused applications. In this work, we explore the LLM chain authoring process. We find from pilot studies that users need support transforming data between steps of a chain, as well as debugging the chain at multiple granularities. To address these needs, we designed PromptChainer, an interactive interface for visually programming chains. Through case studies with four designers and developers, we show that PromptChainer supports building prototypes for a range of applications, and conclude with open questions on scaling chains to even more complex tasks, as well as supporting low-fi chain prototyping.",0f733817e82026f7c29909a51cb4df7d2685f0e7,Semantic Scholar,,, prompter utilizing large language model prompting for a data efficient embodied instruction following,"['Y. Inoue', 'Hiroki Ohashi']",https://arxiv.org/pdf/2211.03267,2022-11-07,,"Embodied Instruction Following (EIF) studies how mobile manipulator robots should be controlled to accomplish long-horizon tasks specified by natural language instructions. While most research on EIF are conducted in simulators, the ultimate goal of the field is to deploy the agents in real life. As such, it is important to minimize the data cost required for training an agent, to help the transition from sim to real. However, many studies only focus on the performance and overlook the data cost -- modules that require separate training on extra data are often introduced without a consideration on deployability. In this work, we propose FILM++ which extends the existing work FILM with modifications that do not require extra data. While all data-driven modules are kept constant, FILM++ more than doubles FILM's performance. Furthermore, we propose Prompter, which replaces FILM++'s semantic search module with language model prompting. Unlike FILM++'s implementation that requires training on extra sets of data, no training is needed for our prompting based implementation while achieving better or at least comparable performance. Prompter achieves 42.64% and 45.72% on the ALFRED benchmark with high-level instructions only and with step-by-step instructions, respectively, outperforming the previous state of the art by 6.57% and 10.31%.",2d30d800e946d3699d9c41bb95c36a6db63676e7,Semantic Scholar,,, evallm interactive evaluation of large language model prompts on userdefined criteria,"['Tae Soo Kim', 'Yoonjoo Lee', 'Jamin Shin', 'Young-Ho Kim', 'Juho Kim']",https://arxiv.org/pdf/2309.13633,2023-09-24,,"By simply composing prompts, developers can prototype novel generative applications with Large Language Models (LLMs). To refine prototypes into products, however, developers must iteratively revise prompts by evaluating outputs to diagnose weaknesses. Formative interviews (N=8) revealed that developers invest significant effort in manually evaluating outputs as they assess context-specific and subjective criteria. 
We present EvalLM, an interactive system for iteratively refining prompts by evaluating multiple outputs on user-defined criteria. By describing criteria in natural language, users can employ the system's LLM-based evaluator to get an overview of where prompts excel or fail, and improve these based on the evaluator's feedback. A comparative study (N=12) showed that EvalLM, when compared to manual evaluation, helped participants compose more diverse criteria, examine twice as many outputs, and reach satisfactory prompts with 59% fewer revisions. Beyond prompts, our work can be extended to augment model evaluation and alignment in specific application contexts.",a0d83f9e15e722f23c14eb83cb2f87c1d1ea6400,Semantic Scholar,,, flatnessaware prompt selection improves accuracy and sample efficiency,"['Lingfeng Shen', 'Weiting Tan', 'Boyuan Zheng', 'Daniel Khashabi']",http://arxiv.org/pdf/2305.10713,2023-05-18,,"With growing capabilities of large language models, prompting them has become the dominant way to access them. This has motivated the development of strategies for automatically selecting effective language prompts. In this paper, we introduce prompt flatness, a new metric to quantify the expected utility of a language prompt. This metric is inspired by flatness regularization in statistical learning that quantifies the robustness of the model towards its parameter perturbations. We provide theoretical foundations for this metric and its relationship with other prompt selection metrics, providing a comprehensive understanding of existing methods. Empirically, we show that combining prompt flatness with existing metrics improves both performance and sample efficiency. Our metric outperforms the previous prompt selection metrics with an average increase of 5% in accuracy and 10% in Pearson correlation across 6 classification benchmarks.",b8ba16a107621f760e7830ddaab8c3d5c5ff06b0,Semantic Scholar,,, ai chains transparent and controllable humanai interaction by chaining large language model prompts,"['Tongshuang Sherry Wu', 'Michael Terry', 'Carrie J. Cai']",https://dl.acm.org/doi/pdf/10.1145/3491102.3517582,2021-10-04,,"Although large language models (LLMs) have demonstrated impressive potential on simple tasks, their breadth of scope, lack of transparency, and insufficient controllability can make them less effective when assisting humans on more complex tasks. In response, we introduce the concept of Chaining LLM steps together, where the output of one step becomes the input for the next, thus aggregating the gains per step. We first define a set of LLM primitive operations useful for Chain construction, then present an interactive system where users can modify these Chains, along with their intermediate results, in a modular way. In a 20-person user study, we found that Chaining not only improved the quality of task outcomes, but also significantly enhanced system transparency, controllability, and sense of collaboration. Additionally, we saw that users developed new ways of interacting with LLMs through Chains: they leveraged sub-tasks to calibrate model expectations, compared and contrasted alternative strategies by observing parallel downstream effects, and debugged unexpected model outputs by “unit-testing” sub-components of a Chain. 
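The PromptChainer and AI Chains entries above both revolve around feeding the output of one LLM step into the next while keeping intermediate results inspectable. A minimal chain runner in that spirit, assuming a hypothetical `llm()` callable; the two-step templates are illustrative, not taken from either paper.

```python
from typing import Callable, Dict, List

def run_chain(llm: Callable[[str], str],
              step_templates: List[str],
              initial_input: str) -> Dict[str, str]:
    """Run prompt templates in order; each step sees the previous step's output via {input}."""
    trace: Dict[str, str] = {"step_0_input": initial_input}
    current = initial_input
    for i, template in enumerate(step_templates, start=1):
        prompt = template.format(input=current)
        current = llm(prompt)
        trace[f"step_{i}_output"] = current  # keep intermediate results so each step can be inspected or debugged
    return trace

# Illustrative two-step chain: extract key points, then draft a summary from them.
steps = [
    "List the key points in the following text:\n{input}",
    "Write a one-paragraph summary based on these key points:\n{input}",
]
```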
In two case studies, we further explore how LLM Chains may be used in future applications.",d3640eb3b542eaf36fee2261f037a6bf0d8eac9c,Semantic Scholar,,, terminologyaware translation with constrained decoding and large language model prompting,"['Nikolay Bogoychev', 'Pinzhen Chen']",https://arxiv.org/pdf/2310.05824,2023-10-09,,"Terminology correctness is important in the downstream application of machine translation, and a prevalent way to ensure this is to inject terminology constraints into a translation system. In our submission to the WMT 2023 terminology translation task, we adopt a translate-then-refine approach which can be domain-independent and requires minimal manual efforts. We annotate random source words with pseudo-terminology translations obtained from word alignment to first train a terminology-aware model. Further, we explore two post-processing methods. First, we use an alignment process to discover whether a terminology constraint has been violated, and if so, we re-decode with the violating word negatively constrained. Alternatively, we leverage a large language model to refine a hypothesis by providing it with terminology constraints. Results show that our terminology-aware model learns to incorporate terminologies effectively, and the large language model refinement process can further improve terminology recall.",e90d30148ecf633db3bbabdcfa3a0ec06236e0d1,Semantic Scholar,,, a prefrontal cortexinspired architecture for planning in large language models,"['Taylor Webb', 'S. S. Mondal', 'Chi Wang', 'Brian Krabach', 'Ida Momennejad']",https://arxiv.org/pdf/2310.00194,2023-09-30,,"Large language models (LLMs) demonstrate impressive performance on a wide variety of tasks, but they often struggle with tasks that require multi-step reasoning or goal-directed planning. To address this, we take inspiration from the human brain, in which planning is accomplished via the recurrent interaction of specialized modules in the prefrontal cortex (PFC). These modules perform functions such as conflict monitoring, state prediction, state evaluation, task decomposition, and task coordination. We find that LLMs are sometimes capable of carrying out these functions in isolation, but struggle to autonomously coordinate them in the service of a goal. Therefore, we propose a black box architecture with multiple LLM-based (GPT-4) modules. The architecture improves planning through the interaction of specialized PFC-inspired modules that break down a larger problem into multiple brief automated calls to the LLM. We evaluate the combined architecture on two challenging planning tasks -- graph traversal and Tower of Hanoi -- finding that it yields significant improvements over standard LLM methods (e.g., zero-shot prompting or in-context learning). These results demonstrate the benefit of utilizing knowledge from cognitive neuroscience to improve planning in LLMs.",31d8bdef7b81e107bf04f226d877fd5aa2f51d34,Semantic Scholar,,, large language models are stateoftheart evaluators of translation quality,"['Tom Kocmi', 'C. Federmann']",http://arxiv.org/pdf/2302.14520,2023-02-28,,"We describe GEMBA, a GPT-based metric for assessment of translation quality, which works both with a reference translation and without. In our evaluation, we focus on zero-shot prompting, comparing four prompt variants in two modes, based on the availability of the reference. We investigate seven versions of GPT models, including ChatGPT. 
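The GEMBA entry above scores translations by zero-shot prompting a GPT model. A sketch of one plausible reference-free scoring prompt and its parsing follows, assuming a hypothetical `llm()` function; the wording is illustrative and not the released GEMBA template.

```python
import re
from typing import Callable, Optional

GEMBA_STYLE_TEMPLATE = (
    "Score the following translation from {src_lang} to {tgt_lang} "
    "on a continuous scale from 0 to 100, where 0 means no meaning preserved "
    "and 100 means a perfect translation.\n\n"
    "{src_lang} source: {source}\n"
    "{tgt_lang} translation: {translation}\n"
    "Score:"
)

def score_translation(llm: Callable[[str], str], source: str, translation: str,
                      src_lang: str = "English", tgt_lang: str = "German") -> Optional[float]:
    """Zero-shot quality score: prompt for a 0-100 rating and pull the first number from the reply."""
    prompt = GEMBA_STYLE_TEMPLATE.format(src_lang=src_lang, tgt_lang=tgt_lang,
                                         source=source, translation=translation)
    reply = llm(prompt)
    match = re.search(r"\d+(\.\d+)?", reply)
    return float(match.group()) if match else None
```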
We show that our method for translation quality assessment only works with GPT 3.5 and larger models. Comparing to results from WMT22’s Metrics shared task, our method achieves state-of-the-art accuracy in both modes when compared to MQM-based human labels. Our results are valid on the system level for all three WMT22 Metrics shared task language pairs, namely English into German, English into Russian, and Chinese into English. This provides a first glimpse into the usefulness of pre-trained, generative large language models for quality assessment of translations. We publicly release all our code and prompt templates used for the experiments described in this work, as well as all corresponding scoring results, to allow for external validation and reproducibility.",4161ad2d2495d8af1d62dc5e71882bde642cd1c1,Semantic Scholar,,, a simple zeroshot prompt weighting technique to improve prompt ensembling in textimage models,"['J. Allingham', 'Jie Ren', 'Michael W. Dusenberry', 'J. Liu', 'Xiuye Gu', 'Yin Cui', 'Dustin Tran', 'Balaji Lakshminarayanan']",https://arxiv.org/pdf/2302.06235,2023-02-13,,"Contrastively trained text-image models have the remarkable ability to perform zero-shot classification, that is, classifying previously unseen images into categories that the model has never been explicitly trained to identify. However, these zero-shot classifiers need prompt engineering to achieve high accuracy. Prompt engineering typically requires hand-crafting a set of prompts for individual downstream tasks. In this work, we aim to automate this prompt engineering and improve zero-shot accuracy through prompt ensembling. In particular, we ask""Given a large pool of prompts, can we automatically score the prompts and ensemble those that are most suitable for a particular downstream dataset, without needing access to labeled validation data?"". We demonstrate that this is possible. In doing so, we identify several pathologies in a naive prompt scoring method where the score can be easily overconfident due to biases in pre-training and test data, and we propose a novel prompt scoring method that corrects for the biases. Using our proposed scoring method to create a weighted average prompt ensemble, our method outperforms equal average ensemble, as well as hand-crafted prompts, on ImageNet, 4 of its variants, and 11 fine-grained classification benchmarks, all while being fully automatic, optimization-free, and not requiring access to labeled validation data.",877e27a1d89095fcf686ab675f62a8432d3285ee,Semantic Scholar,,, controlling personality style in dialogue with zeroshot promptbased learning,"['Angela Ramirez', 'Mamon Alsalihy', 'Kartik Aggarwal', 'Cecilia Li', 'Liren Wu', 'M. Walker']",http://arxiv.org/pdf/2302.03848,2023-02-08,,"Prompt-based or in-context learning has achieved high zero-shot performance on many natural language generation (NLG) tasks. Here we explore the performance of prompt-based learning for simultaneously controlling the personality and the semantic accuracy of an NLG for task-oriented dialogue. We experiment with prompt-based learning on the PERSONAGE restaurant recommendation corpus to generate semantically and stylistically-controlled text for 5 different Big-5 personality types: agreeable, disagreeable, conscientious, unconscientious, and extravert. 
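The zero-shot prompt weighting entry above replaces an equal average of prompt embeddings with a weighted average per class. A numpy sketch of that ensembling step follows, assuming per-prompt scores and precomputed text embeddings are already available (both hypothetical inputs here); the paper's bias-corrected scoring method itself is not reproduced.

```python
import numpy as np

def weighted_prompt_ensemble(prompt_embeddings: np.ndarray,
                             prompt_scores: np.ndarray,
                             temperature: float = 1.0) -> np.ndarray:
    """Combine per-class prompt embeddings with softmax weights derived from prompt scores.

    prompt_embeddings: (num_prompts, num_classes, dim) text embeddings, one per template per class.
    prompt_scores:     (num_prompts,) quality score per prompt template.
    Returns (num_classes, dim) class embeddings, L2-normalized.
    """
    weights = np.exp(prompt_scores / temperature)
    weights = weights / weights.sum()                      # softmax over prompt templates
    class_emb = np.einsum("p,pcd->cd", weights, prompt_embeddings)
    return class_emb / np.linalg.norm(class_emb, axis=-1, keepdims=True)

def classify(image_embedding: np.ndarray, class_embeddings: np.ndarray) -> int:
    """Zero-shot prediction: cosine similarity between the image and each class embedding."""
    image_embedding = image_embedding / np.linalg.norm(image_embedding)
    return int(np.argmax(class_embeddings @ image_embedding))
```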
We test two different classes of discrete prompts to generate utterances for a particular personality style: (1) prompts that demonstrate generating directly from a meaning representation that includes a personality specification; and (2) prompts that rely on first converting the meaning representation to a textual pseudo-reference, and then using the pseudo-reference in a textual style transfer (TST) prompt. In each case, we show that we can vastly improve performance by over-generating outputs and ranking them, testing several ranking functions based on automatic metrics for semantic accuracy, personality-match, and fluency. We also test whether NLG personality demonstrations from the restaurant domain can be used with meaning representations for the video game domain to generate personality stylized utterances about video games. Our findings show that the TST prompts produces the highest semantic accuracy (78.46% for restaurants and 87.6% for video games) and personality accuracy (100% for restaurants and 97% for video games). Our results on transferring personality style to video game utterances are surprisingly good. To our knowledge, there is no previous work testing the application of prompt-based learning to simultaneously controlling both style and semantic accuracy in NLG.",9c39e942b87cbada41a4a52364f996915c7c2d98,Semantic Scholar,,, steps a benchmark for order reasoning in sequential tasks,"['Weizhi Wang', 'Hong Wang', 'Xi Yan']",http://arxiv.org/pdf/2306.04441,2023-06-07,,"Various human activities can be abstracted into a sequence of actions in natural text, i.e. cooking, repairing, manufacturing, etc. Such action sequences heavily depend on the executing order, while disorder in action sequences leads to failure of further task execution by robots or AI agents. Therefore, to verify the order reasoning capability of current neural models in sequential tasks, we propose a challenging benchmark , named STEPS. STEPS involves two subtask settings, focusing on determining the rationality of given next step in recipes and selecting the reasonable step from the multi-choice question, respectively. We describe the data construction and task formulations, and benchmark most of significant Large Language Models (LLMs). The experimental results demonstrate 1) The commonsense reasoning of action orders in sequential tasks are challenging to resolve via zero-shot prompting or few-shot in-context learning for LLMs; 2) Prompting method still significantly lags behind tuning-based method on STEPS.",a8a71f9b10b281e796fdc2ee7aaec40067739574,Semantic Scholar,,, prompting large language model for machine translation a case study,"['Biao Zhang', 'B. Haddow', 'Alexandra Birch']",http://arxiv.org/pdf/2301.07069,2023-01-17,,"Research on prompting has shown excellent performance with little or even no supervised training across many tasks. However, prompting for machine translation is still under-explored in the literature. We fill this gap by offering a systematic study on prompting strategies for translation, examining various factors for prompt template and demonstration example selection. We further explore the use of monolingual data and the feasibility of cross-lingual, cross-domain, and sentence-to-document transfer learning in prompting. 
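The machine translation prompting case study above examines how demonstration examples are selected for the prompt. A small sketch of similarity-based example selection follows, using bag-of-words cosine similarity as a stand-in for the sentence-level features a real system would use; the prompt format is illustrative only.

```python
from collections import Counter
from math import sqrt
from typing import List, Tuple

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na, nb = sqrt(sum(v * v for v in a.values())), sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_examples(test_source: str,
                    pool: List[Tuple[str, str]],
                    k: int = 4) -> List[Tuple[str, str]]:
    """Pick the k (source, target) pairs whose source side is most similar to the test sentence."""
    test_vec = Counter(test_source.lower().split())
    ranked = sorted(pool, key=lambda pair: cosine(test_vec, Counter(pair[0].lower().split())),
                    reverse=True)
    return ranked[:k]

def build_translation_prompt(examples: List[Tuple[str, str]], test_source: str) -> str:
    shots = "\n".join(f"English: {src}\nGerman: {tgt}" for src, tgt in examples)
    return f"{shots}\nEnglish: {test_source}\nGerman:"
```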
Extensive experiments with GLM-130B (Zeng et al., 2022) as the testbed show that 1) the number and the quality of prompt examples matter, where using suboptimal examples degenerates translation; 2) several features of prompt examples, such as semantic similarity, show significant Spearman correlation with their prompting performance; yet, none of the correlations are strong enough; 3) using pseudo parallel prompt examples constructed from monolingual data via zero-shot prompting could improve translation; and 4) improved performance is achievable by transferring knowledge from prompt examples selected in other settings. We finally provide an analysis on the model outputs and discuss several problems that prompting still suffers from.",c879413103f8950bdd414c7f60a39bd7748c9be8,Semantic Scholar,,, a practical survey on zeroshot prompt design for incontext learning,['Yinheng Li'],https://doi.org/10.26615/978-954-452-092-2_069,2023-09-22,,"The remarkable advancements in large language models (LLMs) have brought about significant improvements in Natural Language Processing(NLP) tasks. This paper presents a comprehensive review of in-context learning techniques, focusing on different types of prompts, including discrete, continuous, few-shot, and zero-shot, and their impact on LLM performance. We explore various approaches to prompt design, such as manual design, optimization algorithms, and evaluation methods, to optimize LLM performance across diverse tasks. Our review covers key research studies in prompt engineering, discussing their methodologies and contributions to the field. We also delve into the challenges faced in evaluating prompt performance, given the absence of a single “best” prompt and the importance of considering multiple metrics. In conclusion, the paper highlights the critical role of prompt design in harnessing the full potential of LLMs and provides insights into the combination of manual design, optimization techniques, and rigorous evaluation for more effective and efficient use of LLMs in various NLP tasks.",cd7d770eabb4dab6894d9f91d2c3bc337e94a4e1,Semantic Scholar,,, developing a scalable benchmark for assessing large language models in knowledge graph engineering,"['Lars Meyer', 'Johannes Frey', 'K. Junghanns', 'Felix Brei', 'Kirill Bulert', 'Sabine Grunder-Fahrer', 'Michael Martin']",https://arxiv.org/pdf/2308.16622,2023-08-31,,"As the field of Large Language Models (LLMs) evolves at an accelerated pace, the critical need to assess and monitor their performance emerges. We introduce a benchmarking framework focused on knowledge graph engineering (KGE) accompanied by three challenges addressing syntax and error correction, facts extraction and dataset generation. We show that while being a useful tool, LLMs are yet unfit to assist in knowledge graph generation with zero-shot prompting. Consequently, our LLM-KG-Bench framework provides automatic evaluation and storage of LLM responses as well as statistical data and visualization tools to support tracking of prompt engineering and model performance.",d0e3af5f20a451c04770929979d7a8406a1a2466,Semantic Scholar,,, how far are large language models from agents with theoryofmind,"['Pei Zhou', 'Aman Madaan', 'Srividya Pranavi Potharaju', 'Aditya Gupta', 'Kevin R. McKee', 'Ari Holtzman', 'J. 
Pujara', 'Xiang Ren', 'Swaroop Mishra', 'Aida Nematzadeh', 'Shyam Upadhyay', 'Manaal Faruqui']",https://arxiv.org/pdf/2310.03051,2023-10-04,,"""Thinking is for Doing.""Humans can infer other people's mental states from observations--an ability called Theory-of-Mind (ToM)--and subsequently act pragmatically on those inferences. Existing question answering benchmarks such as ToMi ask models questions to make inferences about beliefs of characters in a story, but do not test whether models can then use these inferences to guide their actions. We propose a new evaluation paradigm for large language models (LLMs): Thinking for Doing (T4D), which requires models to connect inferences about others' mental states to actions in social scenarios. Experiments on T4D demonstrate that LLMs such as GPT-4 and PaLM 2 seemingly excel at tracking characters' beliefs in stories, but they struggle to translate this capability into strategic action. Our analysis reveals the core challenge for LLMs lies in identifying the implicit inferences about mental states without being explicitly asked about as in ToMi, that lead to choosing the correct action in T4D. To bridge this gap, we introduce a zero-shot prompting framework, Foresee and Reflect (FaR), which provides a reasoning structure that encourages LLMs to anticipate future challenges and reason about potential actions. FaR boosts GPT-4's performance from 50% to 71% on T4D, outperforming other prompting methods such as Chain-of-Thought and Self-Ask. Moreover, FaR generalizes to diverse out-of-distribution story structures and scenarios that also require ToM inferences to choose an action, consistently outperforming other methods including few-shot in-context learning.",ed40889e11e812ef33578506844be06d713f6092,Semantic Scholar,,, selficl zeroshot incontext learning with selfgenerated demonstrations,"['Wei-Lin Chen', 'Cheng-Kuang Wu', 'Hsin-Hsi Chen']",http://arxiv.org/pdf/2305.15035,2023-05-24,,"Large language models (LLMs) have exhibited striking in-context learning (ICL) ability to adapt to target tasks with a few input-output demonstrations. For better ICL, different methods are proposed to select representative demonstrations from existing training corpora. However, such settings are not aligned with real-world practices, as end-users usually query LMs without access to demonstration pools. In this work, we introduce Self-ICL -- a simple framework which bootstraps LMs' intrinsic capabilities to perform zero-shot ICL. Given a test input, Self-ICL first prompts the model to generate pseudo-inputs. Next, the model predicts pseudo-labels for the pseudo-inputs via zero-shot prompting. Finally, we perform ICL for the test input with the pseudo-input-label pairs as demonstrations. Evaluation on 23 BIG-Bench Hard tasks shows Self-ICL outperforms zero-shot baselines on both average accuracy and head-to-head comparison. Moreover, with zero-shot chain-of-thought, Self-ICL achieves results comparable to using real demonstrations. Additionally, we conduct a range of analyses to validate Self-ICL's effectiveness and provide insights for its behaviors under different settings.",fe425e341cf646689e42adead17f14eeac5d03e6,Semantic Scholar,,, prodigy enabling incontext learning over graphs,"['Qian Huang', 'Hongyu Ren', 'Peng Chen', 'Gregor Krvzmanc', 'D. Zeng', 'Percy Liang', 'J. 
Leskovec']",http://arxiv.org/pdf/2305.12600,2023-05-21,,"In-context learning is the ability of a pretrained model to adapt to novel and diverse downstream tasks by conditioning on prompt examples, without optimizing any parameters. While large language models have demonstrated this ability, how in-context learning could be performed over graphs is unexplored. In this paper, we develop Pretraining Over Diverse In-Context Graph Systems (PRODIGY), the first pretraining framework that enables in-context learning over graphs. The key idea of our framework is to formulate in-context learning over graphs with a novel prompt graph representation, which connects prompt examples and queries. We then propose a graph neural network architecture over the prompt graph and a corresponding family of in-context pretraining objectives. With PRODIGY, the pretrained model can directly perform novel downstream classification tasks on unseen graphs via in-context learning. We provide empirical evidence of the effectiveness of our framework by showcasing its strong in-context learning performance on tasks involving citation networks and knowledge graphs. Our approach outperforms the in-context learning accuracy of contrastive pretraining baselines with hard-coded adaptation by 18% on average across all setups. Moreover, it also outperforms standard finetuning with limited data by 33% on average with in-context learning.",0088c9f4d50706c7ab71efa13bcb4b42cf2058e2,Semantic Scholar,,, outfox llmgenerated essay detection through incontext learning with adversarially generated examples,"['Ryuto Koike', 'Masahiro Kaneko', 'Naoaki Okazaki']",https://arxiv.org/pdf/2307.11729,2023-07-21,,"Large Language Models (LLMs) have achieved human-level fluency in text generation, making it difficult to distinguish between human-written and LLM-generated texts. This poses a growing risk of misuse of LLMs and demands the development of detectors to identify LLM-generated texts. However, existing detectors lack robustness against attacks: they degrade detection accuracy by simply paraphrasing LLM-generated texts. Furthermore, a malicious user might attempt to deliberately evade the detectors based on detection results, but this has not been assumed in previous studies. In this paper, we propose OUTFOX, a framework that improves the robustness of LLM-generated-text detectors by allowing both the detector and the attacker to consider each other's output. In this framework, the attacker uses the detector's prediction labels as examples for in-context learning and adversarially generates essays that are harder to detect, while the detector uses the adversarially generated essays as examples for in-context learning to learn to detect essays from a strong attacker. Experiments in the domain of student essays show that the proposed detector improves the detection performance on the attacker-generated texts by up to +41.3 points in F1-score. Furthermore, the proposed detector shows a state-of-the-art detection performance: up to 96.9 points in F1-score, beating existing detectors on non-attacked texts. 
Finally, the proposed attacker drastically degrades the performance of detectors by up to -57.0 points F1-score, massively outperforming the baseline paraphrasing method for evading detection.",0095acc4f2c3255cf38fdf844003c97858adb418,Semantic Scholar,,, naturalspeech 2 latent diffusion models are natural and zeroshot speech and singing synthesizers,"['Kai Shen', 'Zeqian Ju', 'Xu Tan', 'Yanqing Liu', 'Yichong Leng', 'Lei He', 'Tao Qin', 'Sheng Zhao', 'Jiang Bian']",http://arxiv.org/pdf/2304.09116,2023-04-18,,"Scaling text-to-speech (TTS) to large-scale, multi-speaker, and in-the-wild datasets is important to capture the diversity in human speech such as speaker identities, prosodies, and styles (e.g., singing). Current large TTS systems usually quantize speech into discrete tokens and use language models to generate these tokens one by one, which suffer from unstable prosody, word skipping/repeating issue, and poor voice quality. In this paper, we develop NaturalSpeech 2, a TTS system that leverages a neural audio codec with residual vector quantizers to get the quantized latent vectors and uses a diffusion model to generate these latent vectors conditioned on text input. To enhance the zero-shot capability that is important to achieve diverse speech synthesis, we design a speech prompting mechanism to facilitate in-context learning in the diffusion model and the duration/pitch predictor. We scale NaturalSpeech 2 to large-scale datasets with 44K hours of speech and singing data and evaluate its voice quality on unseen speakers. NaturalSpeech 2 outperforms previous TTS systems by a large margin in terms of prosody/timbre similarity, robustness, and voice quality in a zero-shot setting, and performs novel zero-shot singing synthesis with only a speech prompt. Audio samples are available at https://speechresearch.github.io/naturalspeech2.",00c367427d9135209d84008e6cb5e90f0adba881,Semantic Scholar,,, demonstratesearchpredict composing retrieval and language models for knowledgeintensive nlp,"['O. Khattab', 'Keshav Santhanam', 'Xiang Lisa Li', 'David Leo Wright Hall', 'Percy Liang', 'Christopher Potts', 'M. Zaharia']",http://arxiv.org/pdf/2212.14024,2022-12-28,,"Retrieval-augmented in-context learning has emerged as a powerful approach for addressing knowledge-intensive tasks using frozen language models (LM) and retrieval models (RM). Existing work has combined these in simple""retrieve-then-read""pipelines in which the RM retrieves passages that are inserted into the LM prompt. To begin to fully realize the potential of frozen LMs and RMs, we propose Demonstrate-Search-Predict (DSP), a framework that relies on passing natural language texts in sophisticated pipelines between an LM and an RM. DSP can express high-level programs that bootstrap pipeline-aware demonstrations, search for relevant passages, and generate grounded predictions, systematically breaking down problems into small transformations that the LM and RM can handle more reliably. We have written novel DSP programs for answering questions in open-domain, multi-hop, and conversational settings, establishing in early evaluations new state-of-the-art in-context learning results and delivering 37-120%, 8-39%, and 80-290% relative gains against the vanilla LM (GPT-3.5), a standard retrieve-then-read pipeline, and a contemporaneous self-ask pipeline, respectively. 
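The DSP entry above contrasts richer pipelines between a language model and a retrieval model with the simple retrieve-then-read baseline it improves on. Below is a bare-bones retrieve-then-read sketch, assuming hypothetical `retrieve()` and `llm()` callables rather than the DSP library itself.

```python
from typing import Callable, List

def retrieve_then_read(question: str,
                       retrieve: Callable[[str, int], List[str]],
                       llm: Callable[[str], str],
                       k: int = 3) -> str:
    """Baseline pipeline: fetch k passages for the question and place them in one reading prompt."""
    passages = retrieve(question, k)
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    prompt = (
        "Answer the question using only the passages below.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )
    return llm(prompt)
```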
We release DSP at https://github.com/stanfordnlp/dsp",03532123ccffae8d411264320e8a5ae2b6eddea0,Semantic Scholar,,, incontext analogical reasoning with pretrained language models,"['Xiaoyang Hu', 'Shane Storks', 'Richard L. Lewis', 'J. Chai']",http://arxiv.org/pdf/2305.17626,2023-05-28,,"Analogical reasoning is a fundamental capacity of human cognition that allows us to reason abstractly about novel situations by relating them to past experiences. While it is thought to be essential for robust reasoning in AI systems, conventional approaches require significant training and/or hard-coding of domain knowledge to be applied to benchmark tasks. Inspired by cognitive science research that has found connections between human language and analogy-making, we explore the use of intuitive language-based abstractions to support analogy in AI systems. Specifically, we apply large pre-trained language models (PLMs) to visual Raven’s Progressive Matrices (RPM), a common relational reasoning test. By simply encoding the perceptual features of the problem into language form, we find that PLMs exhibit a striking capacity for zero-shot relational reasoning, exceeding human performance and nearing supervised vision-based methods. We explore different encodings that vary the level of abstraction over task features, finding that higher-level abstractions further strengthen PLMs’ analogical reasoning. Our detailed analysis reveals insights on the role of model complexity, in-context learning, and prior knowledge in solving RPM tasks.",0366177b44ed13d86b9d704a3a82ea3750e5abed,Semantic Scholar,,, promptaugmented linear probing scaling beyond the limit of fewshot incontext learners,"['Hyunsoo Cho', 'Hyuhng Joon Kim', 'Junyeob Kim', 'Sang-Woo Lee', 'Sang-goo Lee', 'Kang Min Yoo', 'Taeuk Kim']",http://arxiv.org/pdf/2212.10873,2022-12-21,,"Through in-context learning (ICL), large-scale language models are effective few-shot learners without additional model fine-tuning. However, the ICL performance does not scale well with the number of available training sample as it is limited by the inherent input length constraint of the underlying language model. Meanwhile, many studies have revealed that language models are also powerful feature extractors, allowing them to be utilized in a black-box manner and enabling the linear probing paradigm, where lightweight discriminators are trained on top of the pre-extracted input representations. This paper proposes prompt-augmented linear probing (PALP), a hybrid of linear probing and ICL, which leverages the best of both worlds. PALP inherits the scalability of linear probing and the capability of enforcing language models to derive more meaningful representations via tailoring input into a more conceivable form. Throughout in-depth investigations on various datasets, we verified that PALP significantly closes the gap between ICL in the data-hungry scenario and fine-tuning in the data-abundant scenario with little training overhead, potentially making PALP a strong alternative in a black-box scenario.",06edda0310b4ec7c5012d012349252a3a77521b6,Semantic Scholar,,, bytesized32 a corpus and challenge task for generating taskspecific world models expressed as text games,"['Ruoyao Wang', 'G. 
Todd', 'Xingdi Yuan', 'Ziang Xiao', 'Marc-Alexandre Côté', 'Peter Alexander Jansen']",http://arxiv.org/pdf/2305.14879,2023-05-24,,"In this work, we investigate the capacity of language models to generate explicit, interpretable, and interactive world models of scientific and common-sense reasoning tasks. We operationalize this as a task of generating text games, expressed as hundreds of lines of Python code. To facilitate this task, we introduce ByteSized32 (Code: github.com/cognitiveailab/BYTESIZED32), a corpus of 32 reasoning-focused text games totaling 20k lines of Python code. We empirically demonstrate that GPT-4 can use these games as templates for single-shot in-context learning, successfully producing runnable games on unseen topics in 28% of cases. When allowed to self-reflect on program errors, game runnability substantially increases to 57%. While evaluating simulation fidelity is labor-intensive, we introduce a suite of automated metrics to assess game fidelity, technical validity, adherence to task specifications, and winnability, showing a high degree of agreement with expert human ratings. We pose this as a challenge task to spur further development at the juncture of world modeling and code generation.",070b91f80ac118b910c1d2ab5be9f65f685979fe,Semantic Scholar,,, exploring diverse incontext configurations for image captioning,"['Xu Yang', 'Yongliang Wu', 'Ming-Hsuan Yang', 'Haokun Chen', 'Xin Geng']",http://arxiv.org/pdf/2305.14800,2023-05-24,,"After discovering that Language Models (LMs) can be good in-context few-shot learners, numerous strategies have been proposed to optimize in-context sequence configurations. Recently, researchers in Vision-Language (VL) domains also develop their few-shot learners, while they only use the simplest way, ie., randomly sampling, to configure in-context image-text pairs. In order to explore the effects of varying configurations on VL in-context learning, we devised four strategies for image selection and four for caption assignment to configure in-context image-text pairs for image captioning. Here Image Captioning is used as the case study since it can be seen as the visually-conditioned LM. Our comprehensive experiments yield two counter-intuitive but valuable insights, highlighting the distinct characteristics of VL in-context learning due to multi-modal synergy, as compared to the NLP case. Furthermore, in our exploration of optimal combination strategies, we observed an average performance enhancement of 20.9 of CIDEr scores compared to the baseline. The code is given in https://github.com/yongliang-wu/ExploreCfg.",0744783bbefc12b2b1383bed137e8a80061274b7,Semantic Scholar,,, neural machine translation models can learn to be fewshot learners,"['Raphael Reinauer', 'P. Simianer', 'Kaden Uhlig', 'Johannes E. M. Mosig', 'Joern Wuebker']",https://arxiv.org/pdf/2309.08590,2023-09-15,,"The emergent ability of Large Language Models to use a small number of examples to learn to perform in novel domains and tasks, also called in-context learning (ICL). In this work, we show that a much smaller model can be trained to perform ICL by fine-tuning towards a specialized training objective, exemplified on the task of domain adaptation for neural machine translation. With this capacity for ICL, the model can take advantage of relevant few-shot examples to adapt its output towards the domain. We compare the quality of this domain adaptation to traditional supervised techniques and ICL with a 40B-parameter Large Language Model. 
Our approach allows efficient batch inference on a mix of domains and outperforms state-of-the-art baselines in terms of both translation quality and immediate adaptation rate, i.e. the ability to reproduce a specific term after being shown a single example.",09a85806442373f167e45eaf662a7914df048b10,Semantic Scholar,,, good examples make a faster learner simple demonstrationbased learning for lowresource ner,"['Dong-Ho Lee', 'Mahak Agarwal', 'Akshen Kadakia', 'Takashi Shibuya', 'J. Pujara', 'Xiang Ren']",https://aclanthology.org/2022.acl-long.192.pdf,2021-10-16,,"Recent advances in prompt-based learning have shown strong results on few-shot text classification by using cloze-style templates.Similar attempts have been made on named entity recognition (NER) which manually design templates to predict entity types for every text span in a sentence. However, such methods may suffer from error propagation induced by entity span detection, high cost due to enumeration of all possible text spans, and omission of inter-dependencies among token labels in a sentence. Here we present a simple demonstration-based learning method for NER, which lets the input be prefaced by task demonstrations for in-context learning. We perform a systematic study on demonstration strategy regarding what to include (entity examples, with or without surrounding context), how to select the examples, and what templates to use. Results on in-domain learning and domain adaptation show that the model’s performance in low-resource settings can be largely improved with a suitable demonstration strategy (e.g., a 4-17% improvement on 25 train instances). We also find that good demonstration can save many labeled examples and consistency in demonstration contributes to better performance.",0a2ac054c533314c0659f3b139388527df0d42f3,Semantic Scholar,,, prompting language models for linguistic structure,"['Terra Blevins', 'Hila Gonen', 'Luke Zettlemoyer']",http://arxiv.org/pdf/2211.07830,2022-11-15,,"Although pretrained language models (PLMs) can be prompted to perform a wide range of language tasks, it remains an open question how much this ability comes from generalizable linguistic understanding versus surface-level lexical patterns. To test this, we present a structured prompting approach for linguistic structured prediction tasks, allowing us to perform zero- and few-shot sequence tagging with autoregressive PLMs. We evaluate this approach on part-of-speech tagging, named entity recognition, and sentence chunking, demonstrating strong few-shot performance in all cases. We also find that while PLMs contain significant prior knowledge of task labels due to task leakage into the pretraining corpus, structured prompting can also retrieve linguistic structure with arbitrary labels. These findings indicate that the in-context learning ability and linguistic knowledge of PLMs generalizes beyond memorization of their training data.",0a67a5e3f4125445ed84f2db3c92429010aad68a,Semantic Scholar,,, improving the reliability of large language models by leveraging uncertaintyaware incontext learning,"['Yuchen Yang', 'Houqiang Li', 'Yanfeng Wang', 'Yu Wang']",https://arxiv.org/pdf/2310.04782,2023-10-07,,"In recent years, large-scale language models (LLMs) have gained attention for their impressive text generation capabilities. However, these models often face the challenge of""hallucination,""which undermines their reliability. 
In this study, we introduce an uncertainty-aware in-context learning framework to empower the model to enhance or reject its output in response to uncertainty. Human-defined methods for estimating uncertainty typically assume that""uncertainty is lower when the model's response is correct compared to when it is incorrect.""However, setting a precise threshold to distinguish correctness is challenging. Therefore, we introduce uncertainty information as an intermediary variable that implicitly influences the model's behavior. Our innovative uncertainty-aware in-context learning framework involves fine-tuning the LLM using a calibration dataset. Our aim is to improve the model's responses by filtering out answers with high uncertainty while considering the model's knowledge limitations. We evaluate the model's knowledge by examining multiple responses to the same question for the presence of a correct answer. When the model lacks relevant knowledge, the response should indicate that the question cannot be answered. Conversely, when the model has relevant knowledge, the response should provide the correct answer. Extensive experiments confirm the effectiveness of our framework, leading to two key findings. First, the logit output values of the LLM partly reflect inherent uncertainty. Second, our model autonomously recognizes uncertainty, resulting in improved responses.",0aa5940fda7c994675d08c41eca2a6909eb6d205,Semantic Scholar,,, how do incontext examples affect compositional generalization,"['Shengnan An', 'Zeqi Lin', 'Qiang Fu', 'B. Chen', 'Nanning Zheng', 'Jian-Guang Lou', 'D. Zhang']",http://arxiv.org/pdf/2305.04835,2023-05-08,,"Compositional generalization–understanding unseen combinations of seen primitives–is an essential reasoning capability in human intelligence.The AI community mainly studies this capability by fine-tuning neural networks on lots of training samples, while it is still unclear whether and how in-context learning–the prevailing few-shot paradigm based on large language models–exhibits compositional generalization.In this paper, we present CoFe, a test suite to investigate in-context compositional generalization.We find that the compositional generalization performance can be easily affected by the selection of in-context examples, thus raising the research question what the key factors are to make good in-context examples for compositional generalization.We study three potential factors: similarity, diversity and complexity. Our systematic experiments indicate that in-context examples should be structurally similar to the test case, diverse from each other, and individually simple.Furthermore, two strong limitations are observed: in-context compositional generalization on fictional words is much weaker than that on commonly used ones; it is still critical that the in-context examples should cover required linguistic structures, even though the backbone model has been pre-trained on large corpus.We hope our analysis would facilitate the understanding and utilization of in-context learning paradigm.",0ae12d63f77f40b430f17c791a5191ff5fee5086,Semantic Scholar,,, chatrec towards interactive and explainable llmsaugmented recommender system,"['Yunfan Gao', 'Tao Sheng', 'Youlin Xiang', 'Yun Xiong', 'Haofen Wang', 'Jiawei Zhang']",http://arxiv.org/pdf/2303.14524,2023-03-25,,"Large language models (LLMs) have demonstrated their significant potential to be applied for addressing various application tasks. 
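The uncertainty-aware in-context learning entry above filters or rejects answers when uncertainty is high, partly by examining multiple responses to the same question. A simple agreement-based proxy for that idea follows, assuming a hypothetical sampling function `sample_llm()`; the paper's calibration fine-tuning is not modeled here.

```python
from collections import Counter
from typing import Callable, Optional

def answer_or_abstain(question: str,
                      sample_llm: Callable[[str], str],
                      n_samples: int = 8,
                      min_agreement: float = 0.6) -> Optional[str]:
    """Sample several answers; return the majority answer only if its share exceeds the threshold."""
    answers = [sample_llm(question).strip() for _ in range(n_samples)]
    top_answer, count = Counter(answers).most_common(1)[0]
    agreement = count / n_samples
    return top_answer if agreement >= min_agreement else None  # None signals "cannot answer reliably"
```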
However, traditional recommender systems continue to face great challenges such as poor interactivity and explainability, which actually also hinder their broad deployment in real-world systems. To address these limitations, this paper proposes a novel paradigm called Chat-Rec (ChatGPT Augmented Recommender System) that innovatively augments LLMs for building conversational recommender systems by converting user profiles and historical interactions into prompts. Chat-Rec is demonstrated to be effective in learning user preferences and establishing connections between users and products through in-context learning, which also makes the recommendation process more interactive and explainable. What's more, within the Chat-Rec framework, user's preferences can transfer to different products for cross-domain recommendations, and prompt-based injection of information into LLMs can also handle the cold-start scenarios with new items. In our experiments, Chat-Rec effectively improve the results of top-k recommendations and performs better in zero-shot rating prediction task. Chat-Rec offers a novel approach to improving recommender systems and presents new practical scenarios for the implementation of AIGC (AI generated content) in recommender system studies.",0cfdd655100055f234fd23ebecd915504b8e00e3,Semantic Scholar,,, maple multimodal prompt learning,"['Muhammad Uzair Khattak', 'H. Rasheed', 'Muhammad Maaz', 'Salman Khan', 'F. Khan']",https://arxiv.org/pdf/2210.03117,2022-10-06,,"Pre-trained vision-language (V-L) models such as CLIP have shown excellent generalization ability to downstream tasks. However, they are sensitive to the choice of input text prompts and require careful selection of prompt templates to perform well. Inspired by the Natural Language Processing (NLP) literature, recent CLIP adaptation approaches learn prompts as the textual inputs to fine-tune CLIP for downstream tasks. We note that using prompting to adapt representations in a single branch of CLIP (language or vision) is sub-optimal since it does not allow the flexibility to dynamically adjust both representation spaces on a downstream task. In this work, we propose Multi-modal Prompt Learning (MaPLe) for both vision and language branches to improve alignment between the vision and language representations. Our design promotes strong coupling between the vision-language prompts to ensure mutual synergy and discourages learning independent uni-modal solutions. Further, we learn separate prompts across different early stages to progressively model the stage-wise feature relationships to allow rich context learning. We evaluate the effectiveness of our approach on three representative tasks of generalization to novel classes, new target datasets and unseen domain shifts. Compared with the state-of-the-art method Co-CoOp, MaPLe exhibits favorable performance and achieves an absolute gain of 3.45% on novel classes and 2.72% on overall harmonic-mean, averaged over 11 diverse image recognition datasets. Our code and pre-trained models are available at https://github.com/muzairkhattak/multimodal-prompt-learning.",0d0dbfb1b315a43216020abaf74d289456198219,Semantic Scholar,,, a theory of emergent incontext learning as implicit structure induction,"['Michael Hahn', 'Navin Goyal']",http://arxiv.org/pdf/2303.07971,2023-03-14,,"Scaling large language models (LLMs) leads to an emergent capacity to learn in-context from example demonstrations. Despite progress, theoretical understanding of this phenomenon remains limited. 
We argue that in-context learning relies on recombination of compositional operations found in natural language data. We derive an information-theoretic bound showing how in-context learning abilities arise from generic next-token prediction when the pretraining distribution has sufficient amounts of compositional structure, under linguistically motivated assumptions. A second bound provides a theoretical justification for the empirical success of prompting LLMs to output intermediate steps towards an answer. To validate theoretical predictions, we introduce a controlled setup for inducing in-context learning; unlike previous approaches, it accounts for the compositional nature of language. Trained transformers can perform in-context learning for a range of tasks, in a manner consistent with the theoretical results. Mirroring real-world LLMs in a miniature setup, in-context learning emerges when scaling parameters and data, and models perform better when prompted to output intermediate steps. Probing shows that in-context learning is supported by a representation of the input's compositional structure. Taken together, these results provide a step towards theoretical understanding of emergent behavior in large language models.",0ea7fc93d4947d9024ccaa202987a2070683bc1f,Semantic Scholar,,, are humangenerated demonstrations necessary for incontext learning,"['Rui Li', 'Guoyin Wang', 'Jiwei Li']",https://arxiv.org/pdf/2309.14681,2023-09-26,,"Despite the promising few-shot ability of large language models (LLMs), the standard paradigm of In-context Learning (ICL) suffers the disadvantages of susceptibility to selected demonstrations and the intricacy to generate these demonstrations. In this paper, we raise the fundamental question that whether human-generated demonstrations are necessary for ICL. To answer this question, we propose self-contemplation prompting strategy (SEC), a paradigm free from human-crafted demonstrations. The key point of SEC is that, instead of using hand-crafted examples as demonstrations in ICL, SEC asks LLMs to first create demonstrations on their own, based on which the final output is generated. SEC is a flexible framework and can be adapted to both the vanilla ICL and the chain-of-thought (CoT), but with greater ease: as the manual-generation process of both examples and rationale can be saved. Extensive experiments in arithmetic reasoning, commonsense reasoning, multi-task language understanding, and code generation benchmarks, show that SEC, which does not require hand-crafted demonstrations, significantly outperforms the zero-shot learning strategy, and achieves comparable results to ICL with hand-crafted demonstrations. This demonstrates that, for many tasks, contemporary LLMs possess a sufficient level of competence to exclusively depend on their own capacity for decision making, removing the need for external training data. Code is available at https://github.com/ruili33/SEC.",0f45608ddc01b3e192f3490330f4c4b8de074f79,Semantic Scholar,,, honest students from untrusted teachers learning an interpretable questionanswering pipeline from a pretrained language model,"['Jacob Eisenstein', 'D. Andor', 'Bernd Bohnet', 'Michael Collins', 'David M. Mimno']",http://arxiv.org/pdf/2210.02498,2022-10-05,,"Explainable question answering systems should produce not only accurate answers but also rationales that justify their reasoning and allow humans to check their work. But what sorts of rationales are useful and how can we train systems to produce them? 
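The SEC entry above has the LLM write its own demonstrations before answering, instead of relying on human-crafted ones. A rough two-call sketch of that pattern follows, with a hypothetical `llm()` function; the actual SEC prompts differ.

```python
from typing import Callable

def self_generated_demo_answer(llm: Callable[[str], str],
                               task_instruction: str,
                               query: str,
                               n_demos: int = 3) -> str:
    """Step 1: ask the model to invent demonstrations. Step 2: answer the real query with them as context."""
    demo_prompt = (
        f"{task_instruction}\n"
        f"Write {n_demos} example input/output pairs for this task, "
        "formatted as 'Input: ...' and 'Output: ...'."
    )
    demonstrations = llm(demo_prompt)
    answer_prompt = (
        f"{task_instruction}\n{demonstrations}\n"
        f"Input: {query}\nOutput:"
    )
    return llm(answer_prompt)
```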
We propose a new style of rationale for open-book question answering, called \emph{markup-and-mask}, which combines aspects of extractive and free-text explanations. In the markup phase, the passage is augmented with free-text markup that enables each sentence to stand on its own outside the discourse context. In the masking phase, a sub-span of the marked-up passage is selected. To train a system to produce markup-and-mask rationales without annotations, we leverage in-context learning. Specifically, we generate silver annotated data by sending a series of prompts to a frozen pretrained language model, which acts as a teacher. We then fine-tune a smaller student model by training on the subset of rationales that led to correct answers. The student is ""honest"" in the sense that it is a pipeline: the rationale acts as a bottleneck between the passage and the answer, while the ""untrusted"" teacher operates under no such constraints. Thus, we offer a new way to build trustworthy pipeline systems from a combination of end-task annotations and frozen pretrained language models.",0f4ab3fe492ececbfd38be9682047371e2e9b8c6,Semantic Scholar,,, collaborating with language models for embodied reasoning,"['Ishita Dasgupta', 'Christine Kaeser-Chen', 'Kenneth Marino', 'Arun Ahuja', 'Sheila Babayan', 'Felix Hill', 'R. Fergus']",http://arxiv.org/pdf/2302.00763,2023-02-01,,"Reasoning in a complex and ambiguous environment is a key goal for Reinforcement Learning (RL) agents. While some sophisticated RL agents can successfully solve difficult tasks, they require a large amount of training data and often struggle to generalize to new unseen environments and new tasks. On the other hand, Large Scale Language Models (LSLMs) have exhibited strong reasoning ability and the ability to adapt to new tasks through in-context learning. However, LSLMs do not inherently have the ability to interrogate or intervene on the environment. In this work, we investigate how to combine these complementary abilities in a single system consisting of three parts: a Planner, an Actor, and a Reporter. The Planner is a pre-trained language model that can issue commands to a simple embodied agent (the Actor), while the Reporter communicates with the Planner to inform its next command. We present a set of tasks that require reasoning, test this system's ability to generalize zero-shot and investigate failure cases, and demonstrate how components of this system can be trained with reinforcement-learning to improve performance.",102e4c860e39a2bfd7bf3f03b9ad69aac7bf3b5f,Semantic Scholar,,, knowledgedriven cot exploring faithful reasoning in llms for knowledgeintensive question answering,"['Keheng Wang', 'Feiyu Duan', 'Sirui Wang', 'Peiguang Li', 'Yunsen Xian', 'Chuantao Yin', 'Wenge Rong', 'Zhang Xiong']",https://arxiv.org/pdf/2308.13259,2023-08-25,,"Equipped with Chain-of-Thought (CoT), Large language models (LLMs) have shown impressive reasoning ability in various downstream tasks. Even so, suffering from hallucinations and the inability to access external knowledge, LLMs often come with incorrect or unfaithful intermediate reasoning steps, especially in the context of answering knowledge-intensive tasks such as KBQA. To alleviate this issue, we propose a framework called Knowledge-Driven Chain-of-Thought (KD-CoT) to verify and modify reasoning traces in CoT via interaction with external knowledge, and thus overcome the hallucinations and error propagation. 
Concretely, we formulate the CoT rationale process of LLMs into a structured multi-round QA format. In each round, LLMs interact with a QA system that retrieves external knowledge and produce faithful reasoning traces based on retrieved precise answers. The structured CoT reasoning of LLMs is facilitated by our developed KBQA CoT collection, which serves as in-context learning demonstrations and can also be utilized as feedback augmentation to train a robust retriever. Extensive experiments on WebQSP and ComplexWebQuestion datasets demonstrate the effectiveness of proposed KD-CoT in task-solving reasoning generation, which outperforms the vanilla CoT ICL with an absolute success rate of 8.0% and 5.1%. Furthermore, our proposed feedback-augmented retriever outperforms the state-of-the-art baselines for retrieving knowledge, achieving significant improvement in Hit and recall performance. Our code and data are released on https://github.com/AdelWang/KD-CoT/tree/main.",10955e63aa49fab146267949f8ebc9ebe8275183,Semantic Scholar,,, taken out of context on measuring situational awareness in llms,"['Lukas Berglund', 'Asa Cooper Stickland', 'Mikita Balesni', 'Max Kaufmann', 'Meg Tong', 'Tomasz Korbak', 'Daniel Kokotajlo', 'Owain Evans']",https://arxiv.org/pdf/2309.00667,2023-09-01,,"We aim to better understand the emergence of `situational awareness' in large language models (LLMs). A model is situationally aware if it's aware that it's a model and can recognize whether it's currently in testing or deployment. Today's LLMs are tested for safety and alignment before they are deployed. An LLM could exploit situational awareness to achieve a high score on safety tests, while taking harmful actions after deployment. Situational awareness may emerge unexpectedly as a byproduct of model scaling. One way to better foresee this emergence is to run scaling experiments on abilities necessary for situational awareness. As such an ability, we propose `out-of-context reasoning' (in contrast to in-context learning). We study out-of-context reasoning experimentally. First, we finetune an LLM on a description of a test while providing no examples or demonstrations. At test time, we assess whether the model can pass the test. To our surprise, we find that LLMs succeed on this out-of-context reasoning task. Their success is sensitive to the training setup and only works when we apply data augmentation. For both GPT-3 and LLaMA-1, performance improves with model size. These findings offer a foundation for further empirical study, towards predicting and potentially controlling the emergence of situational awareness in LLMs. Code is available at: https://github.com/AsaCooperStickland/situational-awareness-evals.",135ae2ea7a2c966815e85a232469a0a14b4d8d67,Semantic Scholar,,, larger language models do incontext learning differently,"['Jerry W. Wei', 'Jason Wei', 'Yi Tay', 'Dustin Tran', 'Albert Webson', 'Yifeng Lu', 'Xinyun Chen', 'Hanxiao Liu', 'Da Huang', 'Denny Zhou', 'Tengyu Ma']",http://arxiv.org/pdf/2303.03846,2023-03-07,,"We study how in-context learning (ICL) in language models is affected by semantic priors versus input-label mappings. We investigate two setups-ICL with flipped labels and ICL with semantically-unrelated labels-across various model families (GPT-3, InstructGPT, Codex, PaLM, and Flan-PaLM). First, experiments on ICL with flipped labels show that overriding semantic priors is an emergent ability of model scale. 
While small language models ignore flipped labels presented in-context and thus rely primarily on semantic priors from pretraining, large models can override semantic priors when presented with in-context exemplars that contradict priors, despite the stronger semantic priors that larger models may hold. We next study semantically-unrelated label ICL (SUL-ICL), in which labels are semantically unrelated to their inputs (e.g., foo/bar instead of negative/positive), thereby forcing language models to learn the input-label mappings shown in in-context exemplars in order to perform the task. The ability to do SUL-ICL also emerges primarily with scale, and large-enough language models can even perform linear classification in a SUL-ICL setting. Finally, we evaluate instruction-tuned models and find that instruction tuning strengthens both the use of semantic priors and the capacity to learn input-label mappings, but more of the former.",154493f69d7db3d49da0e51df0192c6ad5f1724a,Semantic Scholar,,, incontext learning user simulators for taskoriented dialog systems,"['Silvia Terragni', 'Modestas Filipavicius', 'Nghia Khau', 'Bruna Guedes', ""Andr'e Manso"", 'Roland Mathis']",http://arxiv.org/pdf/2306.00774,2023-06-01,,"This paper presents a novel application of large language models in user simulation for task-oriented dialog systems, specifically focusing on an in-context learning approach. By harnessing the power of these models, the proposed approach generates diverse utterances based on user goals and limited dialog examples. Unlike traditional simulators, this method eliminates the need for labor-intensive rule definition or extensive annotated data, making it more efficient and accessible. Additionally, an error analysis of the interaction between the user simulator and dialog system uncovers common mistakes, providing valuable insights into areas that require improvement. Our implementation is available at https://github.com/telepathylabsai/prompt-based-user-simulator.",15fcd80193d1c446bc3d37fcc30f5475b9ebd5b0,Semantic Scholar,,, cognitive reframing of negative thoughts through humanlanguage model interaction,"['Ashish Sharma', 'Kevin Rushton', 'Inna Wanyin Lin', 'David Wadden', 'Khendra G. Lucas', 'Adam S. Miner', 'Theresa Nguyen', 'Tim Althoff']",http://arxiv.org/pdf/2305.02466,2023-05-04,,"A proven therapeutic technique to overcome negative thoughts is to replace them with a more hopeful “reframed thought.” Although therapy can help people practice and learn this Cognitive Reframing of Negative Thoughts, clinician shortages and mental health stigma commonly limit people’s access to therapy. In this paper, we conduct a human-centered study of how language models may assist people in reframing negative thoughts. Based on psychology literature, we define a framework of seven linguistic attributes that can be used to reframe a thought. We develop automated metrics to measure these attributes and validate them with expert judgements from mental health practitioners. We collect a dataset of 600 situations, thoughts and reframes from practitioners and use it to train a retrieval-enhanced in-context learning model that effectively generates reframed thoughts and controls their linguistic attributes. To investigate what constitutes a “high-quality” reframe, we conduct an IRB-approved randomized field study on a large mental health website with over 2,000 participants. 
Amongst other findings, we show that people prefer highly empathic or specific reframes, as opposed to reframes that are overly positive. Our findings provide key implications for the use of LMs to assist people in overcoming negative thoughts.",16aacf48048ac128a07fe2c0761439e1d7211492,Semantic Scholar,,, dricl demonstrationretrieved incontext learning,"['Man Luo', 'Xin Xu', 'Zhuyun Dai', 'Panupong Pasupat', 'Mehran Kazemi', 'Chitta Baral', 'Vaiva Imbrasaite', 'Vincent Zhao']",http://arxiv.org/pdf/2305.14128,2023-05-23,,"In-context learning (ICL), teaching a large language model (LLM) to perform a task with few-shot demonstrations rather than adjusting the model parameters, has emerged as a strong paradigm for using LLMs. While early studies primarily used a fixed or random set of demonstrations for all test queries, recent research suggests that retrieving semantically similar demonstrations to the input from a pool of available demonstrations results in better performance. This work expands the applicability of retrieval-based ICL approaches by demonstrating that even simple word-overlap similarity measures such as BM25 outperform randomly selected demonstrations. Furthermore, we extend the success of retrieval-based ICL to instruction-finetuned LLMs as well as Chain-of-Thought (CoT) prompting. For instruction-finetuned LLMs, we find that although a model has already seen the training data at training time, retrieving demonstrations from the training data at test time yields better results compared to using no demonstrations or random demonstrations. Last but not least, we train a task-specific demonstration retriever that outperforms off-the-shelf retrievers.",18143a4c2da37444e06feed04cc9efeb0856352d,Semantic Scholar,,, sociocultural norm similarities and differences via situational alignment and explainable textual entailment,"['Sky Ch-Wang', 'Arkadiy Saakyan', 'Oliver Li', 'Zhou Yu', 'S. Muresan']",http://arxiv.org/pdf/2305.14492,2023-05-23,,"Designing systems that can reason across cultures requires that they are grounded in the norms of the contexts in which they operate. However, current research on developing computational models of social norms has primarily focused on American society. Here, we propose a novel approach to discover and compare descriptive social norms across Chinese and American cultures. We demonstrate our approach by leveraging discussions on a Chinese Q&A platform (Zhihu) and the existing SocialChemistry dataset as proxies for contrasting cultural axes, align social situations cross-culturally, and extract social norms from texts using in-context learning. Embedding Chain-of-Thought prompting in a human-AI collaborative framework, we build a high-quality dataset of 3,069 social norms aligned with social situations across Chinese and American cultures alongside corresponding free-text explanations. To test the ability of models to reason about social norms across cultures, we introduce the task of explainable social norm entailment, showing that existing models under 3B parameters have significant room for improvement in both automatic and human evaluation. 
Further analysis of cross-cultural norm differences based on our dataset shows empirical alignment with the social orientations framework, revealing several situational and descriptive nuances in norms across these cultures.",18bd959aaa8a83b5b2192282224d700da7459857,Semantic Scholar,,, flirt feedback loop incontext red teaming,"['Ninareh Mehrabi', 'Palash Goyal', 'Christophe Dupuy', 'Qian Hu', 'Shalini Ghosh', 'R. Zemel', 'Kai-Wei Chang', 'A. Galstyan', 'Rahul Gupta']",https://arxiv.org/pdf/2308.04265,2023-08-08,,"Warning: this paper contains content that may be inappropriate or offensive. As generative models become available for public use in various applications, testing and analyzing vulnerabilities of these models has become a priority. Here we propose an automatic red teaming framework that evaluates a given model and exposes its vulnerabilities against unsafe and inappropriate content generation. Our framework uses in-context learning in a feedback loop to red team models and trigger them into unsafe content generation. We propose different in-context attack strategies to automatically learn effective and diverse adversarial prompts for text-to-image models. Our experiments demonstrate that compared to baseline approaches, our proposed strategy is significantly more effective in exposing vulnerabilities in Stable Diffusion (SD) model, even when the latter is enhanced with safety features. Furthermore, we demonstrate that the proposed framework is effective for red teaming text-to-text models, resulting in significantly higher toxic response generation rate compared to previously reported numbers.",19443d48399d4fe89a4b0a96917c50c6fd9c5af1,Semantic Scholar,,, extractive summarization via chatgpt for faithful summary generation,"['Haopeng Zhang', 'Xiao Liu', 'Jiawei Zhang']",https://arxiv.org/pdf/2304.04193,2023-04-09,,"Extractive summarization is a crucial task in natural language processing that aims to condense long documents into shorter versions by directly extracting sentences. The recent introduction of large language models has attracted significant interest in the NLP community due to its remarkable performance on a wide range of downstream tasks. This paper first presents a thorough evaluation of ChatGPT's performance on extractive summarization and compares it with traditional fine-tuning methods on various benchmark datasets. Our experimental analysis reveals that ChatGPT exhibits inferior extractive summarization performance in terms of ROUGE scores compared to existing supervised systems, while achieving higher performance based on LLM-based evaluation metrics. In addition, we explore the effectiveness of in-context learning and chain-of-thought reasoning for enhancing its performance. Furthermore, we find that applying an extract-then-generate pipeline with ChatGPT yields significant performance improvements over abstractive baselines in terms of summary faithfulness. These observations highlight potential directions for enhancing ChatGPT's capabilities in faithful summarization using two-stage approaches.",1a01c982aa20c1a1ad1ad94866e3197da99a52a2,Semantic Scholar,,, "revisiting outofdistribution robustness in nlp benchmark, analysis, and llms evaluations","['Lifan Yuan', 'Yangyi Chen', 'Ganqu Cui', 'Hongcheng Gao', 'Fangyuan Zou', 'Xingyi Cheng', 'Heng Ji', 'Zhiyuan Liu', 'Maosong Sun']",http://arxiv.org/pdf/2306.04618,2023-06-07,,"This paper reexamines the research on out-of-distribution (OOD) robustness in the field of NLP. 
We find that the distribution shift settings in previous studies commonly lack adequate challenges, hindering the accurate evaluation of OOD robustness. To address these issues, we propose a benchmark construction protocol that ensures clear differentiation and challenging distribution shifts. Then we introduce BOSS, a Benchmark suite for Out-of-distribution robustneSS evaluation covering 5 tasks and 20 datasets. Based on BOSS, we conduct a series of experiments on pre-trained language models for analysis and evaluation of OOD robustness. First, for vanilla fine-tuning, we examine the relationship between in-distribution (ID) and OOD performance. We identify three typical types that unveil the inner learning mechanism, which could potentially facilitate the forecasting of OOD robustness, correlating with the advancements on ID datasets. Then, we evaluate 5 classic methods on BOSS and find that, despite exhibiting some effectiveness in specific cases, they do not offer significant improvement compared to vanilla fine-tuning. Further, we evaluate 5 LLMs with various adaptation paradigms and find that when sufficient ID data is available, fine-tuning domain-specific models outperform LLMs on ID examples significantly. However, in the case of OOD instances, prioritizing LLMs with in-context learning yields better results. We identify that both fine-tuned small models and LLMs face challenges in effectively addressing downstream tasks. The code is public at \url{https://github.com/lifan-yuan/OOD_NLP}.",1a55d16c14587edda62dc9c9ff09e0b531dd169c,Semantic Scholar,,, discern and answer mitigating the impact of misinformation in retrievalaugmented models with discriminators,"['Giwon Hong', 'Jeonghwan Kim', 'Junmo Kang', 'Sung-Hyon Myaeng', 'Joyce Jiyoung Whang']",http://arxiv.org/pdf/2305.01579,2023-05-02,,"Most existing retrieval-augmented language models (LMs) for question answering assume all retrieved information is factually correct. In this work, we study a more realistic scenario in which retrieved documents may contain misinformation, causing conflicts among them. We observe that the existing models are highly brittle to such information in both fine-tuning and in-context few-shot learning settings. We propose approaches to make retrieval-augmented LMs robust to misinformation by explicitly fine-tuning a discriminator or prompting to elicit discrimination capability in GPT-3. Our empirical results on open-domain question answering show that these approaches significantly improve LMs' robustness to knowledge conflicts. We also provide our findings on interleaving the fine-tuned model's decision with the in-context learning process, paving a new path to leverage the best of both worlds.",1a62bc8ed9732bcdb6893a11f5cf239640883f87,Semantic Scholar,,, adversarial demonstration attacks on large language models,"['Jiong Wang', 'Zi-yang Liu', 'Keun Hee Park', 'Muhao Chen', 'Chaowei Xiao']",http://arxiv.org/pdf/2305.14950,2023-05-24,,"With the emergence of more powerful large language models (LLMs), such as ChatGPT and GPT-4, in-context learning (ICL) has gained significant prominence in leveraging these models for specific tasks by utilizing data-label pairs as precondition prompts. While incorporating demonstrations can greatly enhance the performance of LLMs across various tasks, it may introduce a new security concern: attackers can manipulate only the demonstrations without changing the input to perform an attack. 
In this paper, we investigate the security concern of ICL from an adversarial perspective, focusing on the impact of demonstrations. We propose a novel attack method named advICL, which aims to manipulate only the demonstration without changing the input to mislead the models. Our results demonstrate that as the number of demonstrations increases, the robustness of in-context learning decreases. Additionally, we identify an intrinsic property of the demonstrations: they can be used (prepended) with different inputs. As a result, it introduces a more practical threat model in which an attacker can attack the test input example even without knowing and manipulating it. To achieve it, we propose the transferable version of advICL, named Transferable-advICL. Our experiment shows that the adversarial demonstration generated by Transferable-advICL can successfully attack the unseen test input examples. We hope that our study reveals the critical security risks associated with ICL and underscores the need for extensive research on the robustness of ICL, particularly given its increasing significance in the advancement of LLMs.",1abfc211793c683972ded8d3268475e3ee7a88b0,Semantic Scholar,,, is chatgpt a good causal reasoner a comprehensive evaluation,"['Jin-Fang Gao', 'Xiao Ding', 'Bing Qin', 'Ting Liu']",https://arxiv.org/pdf/2305.07375,2023-05-12,,"Causal reasoning ability is crucial for numerous NLP applications. Despite the impressive emerging ability of ChatGPT in various NLP tasks, it is unclear how well ChatGPT performs in causal reasoning. In this paper, we conduct the first comprehensive evaluation of ChatGPT's causal reasoning capabilities. Experiments show that ChatGPT is not a good causal reasoner, but a good causal explainer. Besides, ChatGPT has a serious hallucination on causal reasoning, possibly due to the reporting biases between causal and non-causal relationships in natural language, as well as ChatGPT's upgrading processes, such as RLHF. The In-Context Learning (ICL) and Chain-of-Thought (CoT) techniques can further exacerbate such causal hallucination. Additionally, the causal reasoning ability of ChatGPT is sensitive to the words used to express the causal concept in prompts, and close-ended prompts perform better than open-ended prompts. For events in sentences, ChatGPT excels at capturing explicit causality rather than implicit causality, and performs better in sentences with lower event density and smaller lexical distance between events. The code is available on https://github.com/ArrogantL/ChatGPT4CausalReasoning .",1b9fc8268b392742ea43c2c017a767cf62386139,Semantic Scholar,,, using incontext learning to improve dialogue safety,"['Nicholas Meade', 'Spandana Gella', 'Devamanyu Hazarika', 'Prakhar Gupta', 'Di Jin', 'Siva Reddy', 'Yang Liu', 'Dilek Z. Hakkani-Tür']",http://arxiv.org/pdf/2302.00871,2023-02-02,,"While large neural-based conversational models have become increasingly proficient dialogue agents, recent work has highlighted safety issues with these systems. For example, these systems can be goaded into generating toxic content, which often perpetuates social biases or stereotypes. We investigate a retrieval-based method for reducing bias and toxicity in responses from chatbots. It uses in-context learning to steer a model towards safer generations. Concretely, to generate a response to an unsafe dialogue context, we retrieve demonstrations of safe responses to similar dialogue contexts. 
We find our method performs competitively with strong baselines without requiring training. For instance, using automatic evaluation, we find our best fine-tuned baseline only generates safe responses to unsafe dialogue contexts from DiaSafety 4.04% more than our approach. Finally, we also propose a re-ranking procedure which can further improve response safeness.",1d75f8de31bf47ec46fa5586056420ec8bc97e86,Semantic Scholar,,, how to unleash the power of large language models for fewshot relation extraction,"['Xin Xu', 'Yuqi Zhu', 'Xiaohan Wang', 'Ningyu Zhang']",http://arxiv.org/pdf/2305.01555,2023-05-02,,"Scaling language models have revolutionized widespread NLP tasks, yet little comprehensively explored few-shot relation extraction with large language models. In this paper, we investigate principal methodologies, in-context learning and data generation, for few-shot relation extraction via GPT-3.5 through exhaustive experiments. To enhance few-shot performance, we further propose task-related instructions and schema-constrained data generation. We observe that in-context learning can achieve performance on par with previous prompt learning approaches, and data generation with the large language model can boost previous solutions to obtain new state-of-the-art few-shot results on four widely-studied relation extraction datasets. We hope our work can inspire future research for the capabilities of large language models in few-shot relation extraction. Code is available in https://github.com/zjunlp/DeepKE/tree/main/example/llm.",1ddeb500dd88d4b860b32bec1e2a85f8a53910d6,Semantic Scholar,,, multilingual llms are better crosslingual incontext learners with alignment,"['Eshaan Tanwar', 'Manish Borthakur', 'Subhabrata Dutta', 'Tanmoy Chakraborty']",http://arxiv.org/pdf/2305.05940,2023-05-10,,"In-context learning (ICL) unfolds as large language models become capable of inferring test labels conditioned on a few labeled samples without any gradient update. ICL-enabled large language models provide a promising step forward toward bypassing recurrent annotation costs in a low-resource setting. Yet, only a handful of past studies have explored ICL in a cross-lingual setting, in which the need for transferring label-knowledge from a high-resource language to a low-resource one is immensely crucial. To bridge the gap, we provide the first in-depth analysis of ICL for cross-lingual text classification. We find that the prevalent mode of selecting random input-label pairs to construct the prompt-context is severely limited in the case of cross-lingual ICL, primarily due to the lack of alignment in the input as well as the output spaces. To mitigate this, we propose a novel prompt construction strategy — Cross-lingual In-context Source Target Alignment (X-InSTA). With an injected coherence in the semantics of the input examples and a task-based alignment across the source and target languages, X-InSTA is able to outperform random prompt selection by a large margin across three different tasks using 44 different cross-lingual pairs.",1fb5a5298747b8c7d60f98640a543f20d42ab053,Semantic Scholar,,, boosting incontext learning with factual knowledge,"['J. Wang', 'Chengyu Wang', 'Chuanqi Tan', 'Jun Huang', 'Ming Gao']",https://arxiv.org/pdf/2309.14771,2023-09-26,,"In-Context Learning (ICL) over Large language models (LLMs) aims at solving previously unseen tasks by conditioning on a few training examples, eliminating the need for parameter updates and achieving competitive performance. 
In this paper, we demonstrate that factual knowledge is imperative for the performance of ICL in three core facets, i.e., the inherent knowledge learned in LLMs, the factual knowledge derived from the selected in-context examples, and the knowledge biases in LLMs for output generation. To unleash the power of LLMs in few-shot learning scenarios, we introduce a novel Knowledgeable In-Context Tuning (KICT) framework to further improve the performance of ICL: 1) injecting factual knowledge to LLMs during continual self-supervised pre-training, 2) judiciously selecting the examples with high knowledge relevance, and 3) calibrating the prediction results based on prior knowledge. We evaluate the proposed approaches on auto-regressive LLMs (e.g., GPT-style models) over multiple text classification and question answering tasks. Experimental results demonstrate that KICT substantially outperforms strong baselines, and improves by more than 13% and 7% of accuracy on text classification and question answering tasks, respectively.",20177a85f632a34d085bcf645507e461733fcc96,Semantic Scholar,,, chatgpt for zeroshot dialogue state tracking a solution or an opportunity,"['Michael Heck', 'Nurul Lubis', 'Benjamin Matthias Ruppik', 'Renato Vukovic', 'Shutong Feng', 'Christian Geishauser', 'Hsien-chin Lin', 'Carel van Niekerk', ""Milica Gavsi'c""]",http://arxiv.org/pdf/2306.01386,2023-06-02,,"Recent research on dialog state tracking (DST) focuses on methods that allow few- and zero-shot transfer to new domains or schemas. However, performance gains heavily depend on aggressive data augmentation and fine-tuning of ever larger language model based architectures. In contrast, general purpose language models, trained on large amounts of diverse data, hold the promise of solving any kind of task without task-specific training. We present preliminary experimental results on the ChatGPT research preview, showing that ChatGPT achieves state-of-the-art performance in zero-shot DST. Despite our findings, we argue that properties inherent to general purpose models limit their ability to replace specialized systems. We further theorize that the in-context learning capabilities of such models will likely become powerful tools to support the development of dedicated dialog state trackers and enable dynamic methods.",214fbadc57e954e325dc055fee5ac0e224dfde11,Semantic Scholar,,, cup curriculum learning based prompt tuning for implicit event argument extraction,"['Jiaju Lin', 'Qin Chen', 'Jie Zhou', 'Jiankai Jin', 'Liangye He']",https://arxiv.org/pdf/2205.00498,2022-05-01,,"Implicit event argument extraction (EAE) aims to identify arguments that could scatter over the document. Most previous work focuses on learning the direct relations between arguments and the given trigger, while the implicit relations with long-range dependency are not well studied. Moreover, recent neural network based approaches rely on a large amount of labeled data for training, which is unavailable due to the high labelling cost. In this paper, we propose a Curriculum learning based Prompt tuning (CUP) approach, which resolves implicit EAE by four learning stages. The stages are defined according to the relations with the trigger node in a semantic graph, which well captures the long-range dependency between arguments and the trigger. 
In addition, we integrate a prompt-based encoder-decoder model to elicit related knowledge from pre-trained language models (PLMs) in each stage, where the prompt templates are adapted with the learning progress to enhance the reasoning for arguments. Experimental results on two well-known benchmark datasets show the great advantages of our proposed approach. In particular, we outperform the state-of-the-art models in both fully-supervised and low-data scenarios.",65d88194a902332b78dd5a7b919fa577bfa7ee9f,Semantic Scholar,,, delving into multimodal prompting for finegrained visual classification,"['Xin Jiang', 'Hao Tang', 'Junyao Gao', 'Xiaoyu Du', 'Shengfeng He', 'Zechao Li']",https://arxiv.org/pdf/2309.08912,2023-09-16,,"Fine-grained visual classification (FGVC) involves categorizing fine subdivisions within a broader category, which poses challenges due to subtle inter-class discrepancies and large intra-class variations. However, prevailing approaches primarily focus on uni-modal visual concepts. Recent advancements in pre-trained vision-language models have demonstrated remarkable performance in various high-level vision tasks, yet the applicability of such models to FGVC tasks remains uncertain. In this paper, we aim to fully exploit the capabilities of cross-modal description to tackle FGVC tasks and propose a novel multimodal prompting solution, denoted as MP-FGVC, based on the contrastive language-image pertaining (CLIP) model. Our MP-FGVC comprises a multimodal prompts scheme and a multimodal adaptation scheme. The former includes Subcategory-specific Vision Prompt (SsVP) and Discrepancy-aware Text Prompt (DaTP), which explicitly highlights the subcategory-specific discrepancies from the perspectives of both vision and language. The latter aligns the vision and text prompting elements in a common semantic space, facilitating cross-modal collaborative reasoning through a Vision-Language Fusion Module (VLFM) for further improvement on FGVC. Moreover, we tailor a two-stage optimization strategy for MP-FGVC to fully leverage the pre-trained CLIP model and expedite efficient adaptation for FGVC. Extensive experiments conducted on four FGVC datasets demonstrate the effectiveness of our MP-FGVC.",11e3efa08b5db1a8958dfe8119593a4d3f18796a,Semantic Scholar,,, draw your art dream diverse digital art synthesis with multimodal guided diffusion,"['Nisha Huang', 'Fan Tang', 'Weiming Dong', 'Changsheng Xu']",https://dl.acm.org/doi/pdf/10.1145/3503161.3548282,2022-09-27,,"Digital art synthesis is receiving increasing attention in the multimedia community because of engaging the public with art effectively. Current digital art synthesis methods usually use single-modality inputs as guidance, thereby limiting the expressiveness of the model and the diversity of generated results. To solve this problem, we propose the multimodal guided artwork diffusion (MGAD) model, which is a diffusion-based digital artwork generation approach that utilizes multimodal prompts as guidance to control the classifier-free diffusion model. Additionally, the contrastive language-image pretraining (CLIP) model is used to unify text and image modalities. Extensive experimental results on the quality and quantity of the generated digital art paintings confirm the effectiveness of the combination of the diffusion model and multimodal guidance. 
Code is available at https://github.com/haha-lisa/MGAD-multimodal-guided-artwork-diffusion.",159d2980566fa00bc752e180471ee46d7899d66e,Semantic Scholar,,, zeroshot and fewshot video question answering with multimodal prompts,"['Deniz Engin', 'Yannis Avrithis']",https://arxiv.org/pdf/2309.15915,2023-09-27,,"Recent vision-language models are driven by large-scale pretrained models. However, adapting pretrained models on limited data presents challenges such as overfitting, catastrophic forgetting, and the cross-modal gap between vision and language. We introduce a parameter-efficient method to address these challenges, combining multimodal prompt learning and a transformer-based mapping network, while keeping the pretrained models frozen. Our experiments on several video question answering benchmarks demonstrate the superiority of our approach in terms of performance and parameter efficiency on both zero-shot and few-shot settings. Our code is available at https://engindeniz.github.io/vitis.",185e79641a8e7b18ac5a73b8c3cb82fdee3a0c6d,Semantic Scholar,,, vima general robot manipulation with multimodal prompts,"['Yunfan Jiang', 'Agrim Gupta', 'Zichen Zhang', 'Guanzhi Wang', 'Yongqiang Dou', 'Yanjun Chen', 'Li Fei-Fei', 'Anima Anandkumar', 'Yuke Zhu', 'Linxi (Jim) Fan']",http://arxiv.org/pdf/2210.03094,2022-10-06,,"Prompt-based learning has emerged as a successful paradigm in natural language processing, where a single general-purpose language model can be instructed to perform any task specified by input prompts. Yet task specification in robotics comes in various forms, such as imitating one-shot demonstrations, following language instructions, and reaching visual goals. They are often considered different tasks and tackled by specialized models. We show that a wide spectrum of robot manipulation tasks can be expressed with multimodal prompts, interleaving textual and visual tokens. Accordingly, we develop a new simulation benchmark that consists of thousands of procedurally-generated tabletop tasks with multimodal prompts, 600K+ expert trajectories for imitation learning, and a four-level evaluation protocol for systematic generalization. We design a transformer-based robot agent, VIMA, that processes these prompts and outputs motor actions autoregressively. VIMA features a recipe that achieves strong model scalability and data efficiency. It outperforms alternative designs in the hardest zero-shot generalization setting by up to $2.9\times$ task success rate given the same training data. With $10\times$ less training data, VIMA still performs $2.7\times$ better than the best competing variant. Code and video demos are available at https://vimalabs.github.io/",25425e299101b13ec2872417a14f961f4f8aa18e,Semantic Scholar,,, multimodal prompt learning for product title generation with extremely limited labels,"['Bang Yang', 'Fenglin Liu', 'Zheng Li', 'Qingyu Yin', 'Chenyu You', 'Bing Yin', 'Yuexian Zou']",https://arxiv.org/pdf/2307.01969,2023-07-05,,"Generating an informative and attractive title for the product is a crucial task for e-commerce. Most existing works follow the standard multimodal natural language generation approaches, e.g., image captioning, and employ the large scale of human-labelled datasets to train desirable models. However, for novel products, especially in a different domain, there are few existing labelled data. 
In this paper, we propose a prompt-based approach, i.e., the Multimodal Prompt Learning framework, to accurately and efficiently generate titles for novel products with limited labels. We observe that the core challenges of novel product title generation are the understanding of novel product characteristics and the generation of titles in a novel writing style. To this end, we build a set of multimodal prompts from different modalities to preserve the corresponding characteristics and writing styles of novel products. As a result, with extremely limited labels for training, the proposed method can retrieve the multimodal prompts to generate desirable titles for novel products. The experiments and analyses are conducted on five novel product categories under both the in-domain and out-of-domain experimental settings. The results show that, with only 1% of downstream labelled data for training, our proposed approach achieves the best few-shot results and even achieves competitive results with fully-supervised methods trained on 100% of training data; With the full labelled data for training, our method achieves state-of-the-art results.",37d91ebd5ec969e2b81027e05f886febf09d2504,Semantic Scholar,,, multimodal prompting with missing modalities for visual recognition,"['Yi-Lun Lee', 'Yi-Hsuan Tsai', 'Wei-Chen Chiu', 'Chen-Yu Lee']",https://arxiv.org/pdf/2303.03369,2023-03-06,,"In this paper, we tackle two challenges in multimodal learning for visual recognition: 1) when missing-modality occurs either during training or testing in real-world situations; and 2) when the computation resources are not available to finetune on heavy transformer models. To this end, we propose to utilize prompt learning and mitigate the above two challenges together. Specifically, our modality-missing-aware prompts can be plugged into multimodal transformers to handle general missing-modality cases, while only requiring less than 1% learnable parameters compared to training the entire model. We further explore the effect of different prompt configurations and analyze the robustness to missing modality. Extensive experiments are conducted to show the effectiveness of our prompt learning framework that improves the performance under various missing-modality cases, while alleviating the requirement of heavy model retraining. Code is available.11https://github.com/YiLunLee/missing_aware_prompts",483757dff12df441c6991dd5e7408d922fe01c3d,Semantic Scholar,,, multimodal prompt retrieval for generative visual question answering,"['Timothy Ossowski', 'Junjie Hu']",http://arxiv.org/pdf/2306.17675,2023-06-30,,"Recent years have witnessed impressive results of pre-trained vision-language models on knowledge-intensive tasks such as visual question answering (VQA). Despite the recent advances in VQA, existing methods mainly adopt a discriminative formulation that predicts answers within a pre-defined label set, leading to easy overfitting on low-resource domains with limited labeled data (e.g., medicine) and poor generalization under domain shift to another dataset. To tackle this limitation, we propose a novel generative model enhanced by multimodal prompt retrieval (MPR) that integrates retrieved prompts and multimodal features to generate answers in free text. Our generative model enables rapid zero-shot dataset adaptation to unseen data distributions and open-set answer labels across datasets. 
Our experiments on medical VQA tasks show that MPR outperforms its non-retrieval counterpart by up to 30% accuracy points in a few-shot domain adaptation setting.",534675abb9d72fc0c08d080d4f73335ceb75902c,Semantic Scholar,,, multimodal garment designer humancentric latent diffusion models for fashion image editing,"['Alberto Baldrati', 'Davide Morelli', 'Giuseppe Cartella', 'M. Cornia', 'M. Bertini', 'R. Cucchiara']",https://arxiv.org/pdf/2304.02051,2023-04-04,,"Fashion illustration is used by designers to communicate their vision and to bring the design idea from conceptualization to realization, showing how clothes interact with the human body. In this context, computer vision can thus be used to improve the fashion design process. Differently from previous works that mainly focused on the virtual try-on of garments, we propose the task of multimodal-conditioned fashion image editing, guiding the generation of human-centric fashion images by following multimodal prompts, such as text, human body poses, and garment sketches. We tackle this problem by proposing a new architecture based on latent diffusion models, an approach that has not been used before in the fashion domain. Given the lack of existing datasets suitable for the task, we also extend two existing fashion datasets, namely Dress Code and VITON-HD, with multimodal annotations collected in a semi-automatic manner. Experimental results on these new datasets demonstrate the effectiveness of our proposal, both in terms of realism and coherence with the given multimodal inputs. Source code and collected multimodal annotations are publicly available at: https://github.com/aimagelab/multimodal-garment-designer.",6c925427841ea4a776a578d438f9e47a64c3014e,Semantic Scholar,,, vitaclip video and text adaptive clip via multimodal prompting,"['Syed Talal Wasim', 'Muzammal Naseer', 'Salman Khan', 'F. Khan', 'M. Shah']",https://arxiv.org/pdf/2304.03307,2023-04-06,,"Adopting contrastive image-text pretrained models like CLIP towards video classification has gained attention due to its cost-effectiveness and competitive performance. However, recent works in this area face a trade-off. Finetuning the pretrained model to achieve strong supervised performance results in low zero-shot generalization. Similarly, freezing the backbone to retain zero-shot capability causes significant drop in supervised accuracy. Because of this, recent works in literature typically train separate models for supervised and zero-shot action recognition. In this work, we propose a multimodal prompt learning scheme that works to balance the supervised and zero-shot performance under a single unified training. Our prompting approach on the vision side caters for three aspects: 1) Global video-level prompts to model the data distribution; 2) Local frame-level prompts to provide per-frame discriminative conditioning; and 3) a summary prompt to extract a condensed video representation. Additionally, we define a prompting scheme on the text side to augment the textual context. Through this prompting scheme, we can achieve state-of-the-art zero-shot performance on Kinetics-600, HMDB51 and UCF101 while remaining competitive in the supervised setting. By keeping the pretrained backbone frozen, we optimize a much lower number of parameters and retain the existing general representation which helps achieve the strong zero-shot performance. 
Our codes/models will be released at https://github.com/TalalWasim/Vita-Clip..",8b5f4b383008bfb365cee72e5301ee04a24221f7,Semantic Scholar,,, audio visual language maps for robot navigation,"['Chen Huang', 'Oier Mees', 'Andy Zeng', 'Wolfram Burgard']",http://arxiv.org/pdf/2303.07522,2023-03-13,,"While interacting in the world is a multi-sensory experience, many robots continue to predominantly rely on visual perception to map and navigate in their environments. In this work, we propose Audio-Visual-Language Maps (AVLMaps), a unified 3D spatial map representation for storing cross-modal information from audio, visual, and language cues. AVLMaps integrate the open-vocabulary capabilities of multimodal foundation models pre-trained on Internet-scale data by fusing their features into a centralized 3D voxel grid. In the context of navigation, we show that AVLMaps enable robot systems to index goals in the map based on multimodal queries, e.g., textual descriptions, images, or audio snippets of landmarks. In particular, the addition of audio information enables robots to more reliably disambiguate goal locations. Extensive experiments in simulation show that AVLMaps enable zero-shot multimodal goal navigation from multimodal prompts and provide 50% better recall in ambiguous scenarios. These capabilities extend to mobile robots in the real world - navigating to landmarks referring to visual, audio, and spatial concepts. Videos and code are available at: https://avlmaps.github.io.",93565fe6db3948c9c414af1d1edccf4aff5e2e10,Semantic Scholar,,, fewshot multimodal sentiment analysis based on multimodal probabilistic fusion prompts,"['Xiaocui Yang', 'Shi Feng', 'Daling Wang', 'Pengfei Hong', 'Soujanya Poria']",https://arxiv.org/pdf/2211.06607,2022-11-12,,"Multimodal sentiment analysis has gained significant attention due to the proliferation of multimodal content on social media. However, existing studies in this area rely heavily on large-scale supervised data, which is time-consuming and labor-intensive to collect. Thus, there is a need to address the challenge of few-shot multimodal sentiment analysis. To tackle this problem, we propose a novel method called Multimodal Probabilistic Fusion Prompts (MultiPoint) that leverages diverse cues from different modalities for multimodal sentiment detection in the few-shot scenario. Specifically, we start by introducing a Consistently Distributed Sampling approach called CDS, which ensures that the few-shot dataset has the same category distribution as the full dataset. Unlike previous approaches primarily using prompts based on the text modality, we design unified multimodal prompts to reduce discrepancies between different modalities and dynamically incorporate multimodal demonstrations into the context of each multimodal instance. To enhance the model's robustness, we introduce a probabilistic fusion method to fuse output predictions from multiple diverse prompts for each input. Our extensive experiments on six datasets demonstrate the effectiveness of our approach. First, our method outperforms strong baselines in the multimodal few-shot setting. 
Furthermore, under the same amount of data (1% of the full dataset), our CDS-based experimental results significantly outperform those based on previously sampled datasets constructed from the same number of instances of each class.",befcb92f313030632717a74a2afd651a1445a745,Semantic Scholar,,, fewshot joint multimodal aspectsentiment analysis based on generative multimodal prompt,"['Xiaocui Yang', 'Shi Feng', 'Daling Wang', 'Sun Qi', 'Wenfang Wu', 'Yifei Zhang', 'Pengfei Hong', 'Soujanya Poria']",http://arxiv.org/pdf/2305.10169,2023-05-17,,"We have witnessed the rapid proliferation of multimodal data on numerous social media platforms. Conventional studies typically require massive labeled data to train models for Multimodal Aspect-Based Sentiment Analysis (MABSA). However, collecting and annotating fine-grained multimodal data for MABSA is tough. To alleviate the above issue, we perform three MABSA-related tasks with quite a small number of labeled multimodal samples. We first build diverse and comprehensive multimodal few-shot datasets according to the data distribution. To capture the specific prompt for each aspect term in a few-shot scenario, we propose a novel Generative Multimodal Prompt (GMP) model for MABSA, which includes the Multimodal Encoder module and the N-Stream Decoders module. We further introduce a subtask to predict the number of aspect terms in each instance to construct the multimodal prompt. Extensive experiments on two datasets demonstrate that our approach outperforms strong baselines on two MABSA-related tasks in the few-shot setting.",fd7082630257b03771c72a926a64b13eb16e00af,Semantic Scholar,,, textbased person search without parallel imagetext data,"['Yang Bai', 'Jingyao Wang', 'Min Cao', 'Cheng Chen', 'Ziqiang Cao', 'Liqiang Nie', 'Min Zhang']",https://arxiv.org/pdf/2305.12964,2023-05-22,,"Text-based person search (TBPS) aims to retrieve the images of the target person from a large image gallery based on a given natural language description. Existing methods are dominated by training models with parallel image-text pairs, which are very costly to collect. In this paper, we make the first attempt to explore TBPS without parallel image-text data (μ-TBPS), in which only non-parallel images and texts, or even image-only data, can be adopted. Towards this end, we propose a two-stage framework, generation-then-retrieval (GTR), to first generate the corresponding pseudo text for each image and then perform the retrieval in a supervised manner. In the generation stage, we propose a fine-grained image captioning strategy to obtain an enriched description of the person image, which firstly utilizes a set of instruction prompts to activate the off-the-shelf pretrained vision-language model to capture and generate fine-grained person attributes, and then converts the extracted attributes into a textual description via the finetuned large language model or the hand-crafted template. In the retrieval stage, considering the noise interference of the generated texts for training model, we develop a confidence score-based training scheme by enabling more reliable texts to contribute more during the training. 
Experimental results on multiple TBPS benchmarks (i.e., CUHK-PEDES, ICFG-PEDES and RSTPReid) show that the proposed GTR can achieve a promising performance without relying on parallel image-text data.",0213827d882ec34aa9935f2b03a80362af806778,Semantic Scholar,,, neuro symbolic reasoning for planning counterexample guided inductive synthesis using large language models and satisfiability solving,"['Sumit Kumar Jha', 'Susmit Jha', 'Patrick Lincoln', 'Nathaniel D. Bastian', 'Alvaro Velasquez', 'Rickard Ewetz', 'Sandeep Neema']",https://arxiv.org/pdf/2309.16436,2023-09-28,,"Generative large language models (LLMs) with instruct training such as GPT-4 can follow human-provided instruction prompts and generate human-like responses to these prompts. Apart from natural language responses, they have also been found to be effective at generating formal artifacts such as code, plans, and logical specifications from natural language prompts. Despite their remarkably improved accuracy, these models are still known to produce factually incorrect or contextually inappropriate results despite their syntactic coherence - a phenomenon often referred to as hallucination. This limitation makes it difficult to use these models to synthesize formal artifacts that are used in safety-critical applications. Unlike tasks such as text summarization and question-answering, bugs in code, plan, and other formal artifacts produced by LLMs can be catastrophic. We posit that we can use the satisfiability modulo theory (SMT) solvers as deductive reasoning engines to analyze the generated solutions from the LLMs, produce counterexamples when the solutions are incorrect, and provide that feedback to the LLMs exploiting the dialog capability of instruct-trained LLMs. This interaction between inductive LLMs and deductive SMT solvers can iteratively steer the LLM to generate the correct response. In our experiments, we use planning over the domain of blocks as our synthesis task for evaluating our approach. We use GPT-4, GPT3.5 Turbo, Davinci, Curie, Babbage, and Ada as the LLMs and Z3 as the SMT solver. Our method allows the user to communicate the planning problem in natural language; even the formulation of queries to SMT solvers is automatically generated from natural language. Thus, the proposed technique can enable non-expert users to describe their problems in natural language, and the combination of LLMs and SMT solvers can produce provably correct solutions.",1c89d8672a3742672850fa46f1e8ec51f3261019,Semantic Scholar,,, inferfix endtoend program repair with llms,"['Ma Jin', 'Syed Shahriar', 'Michele Tufano', 'Xin Shi', 'Shuai Lu', 'Neel Sundaresan', 'Alexey Svyatkovskiy']",https://arxiv.org/pdf/2303.07263,2023-03-13,,"Software development life cycle is profoundly influenced by bugs; their introduction, identification, and eventual resolution account for a significant portion of software development cost. This has motivated software engineering researchers and practitioners to propose different approaches for automating the identification and repair of software defects. Large Language Models (LLMs) have been adapted to the program repair task through few-shot demonstration learning and instruction prompting, treating this as an infilling task. However, these models have only focused on learning general bug-fixing patterns for uncategorized bugs mined from public repositories. 
In this paper, we propose InferFix: a transformer-based program repair framework paired with a state-of-the-art static analyzer to fix critical security and performance bugs. InferFix combines a Retriever – transformer encoder model pretrained via contrastive learning objective, which aims at searching for semantically equivalent bugs and corresponding fixes; and a Generator – an LLM (12 billion parameter Codex Cushman model) finetuned on supervised bug-fix data with prompts augmented via adding bug type annotations and semantically similar fixes retrieved from an external non-parametric memory. To train and evaluate our approach, we curated a novel, metadata-rich dataset of bugs extracted by executing the Infer static analyzer on the change histories of thousands of Java and C# repositories. Our evaluation demonstrates that InferFix outperforms strong LLM baselines, with a top-1 accuracy of 65.6% for generating fixes in C# and 76.8% in Java. We discuss the deployment of InferFix alongside Infer at Microsoft, which offers an end-to-end solution for detection, classification, and localization of bugs, as well as fixing and validation of candidate patches, integrated in the continuous integration (CI) pipeline to automate the software development workflow.",34d24b2d9f116f8f652c112d4ac924afcf11bd0d,Semantic Scholar,,, edm3 event detection as multitask text generation,"['Ujjwala Anantheswaran', 'Himanshu Gupta', 'Mihir Parmar', 'Kuntal Kumar Pal', 'Chitta Baral']",http://arxiv.org/pdf/2305.16357,2023-05-25,,"Event detection refers to identifying event occurrences in a text and comprises two subtasks: event identification and classification. We present EDM3, a novel approach for Event Detection that formulates three generative tasks: identification, classification, and combined detection. We show that EDM3 helps to learn transferable knowledge that can be leveraged to perform Event Detection and its subtasks concurrently, mitigating the error propagation inherent in pipelined approaches. Unlike previous dataset- or domain-specific approaches, EDM3 utilizes the existing knowledge of language models, allowing it to be trained over any classification schema. We evaluate EDM3 on multiple event detection datasets: RAMS, WikiEvents, MAVEN, and MLEE, showing that EDM3 outperforms 1) single-task performance by 8.4% on average and 2) multi-task performance without instructional prompts by 2.4% on average. We obtain SOTA results on RAMS (71.3% vs. 65.1% F-1) and competitive performance on other datasets. We analyze our approach to demonstrate its efficacy in low-resource and multi-sentence settings. We also show the effectiveness of this approach on non-standard event configurations such as multi-word and multi-class event triggers. Overall, our results show that EDM3 is a promising approach for Event Detection that has the potential for real-world applications.",3d71d4097a3dcc1289b709872d7523a035e6986f,Semantic Scholar,,, vast a visionaudiosubtitletext omnimodality foundation model and dataset,"['Sihan Chen', 'Handong Li', 'Qunbo Wang', 'Zijia Zhao', 'Ming-Ting Sun', 'Xinxin Zhu', 'J. Liu']",https://arxiv.org/pdf/2305.18500,2023-05-29,,"Vision and text have been fully explored in contemporary video-text foundational models, while other modalities such as audio and subtitles in videos have not received sufficient attention. 
In this paper, we resort to establishing connections between multi-modality video tracks, including Vision, Audio, and Subtitle, and Text by exploring an automatically generated large-scale omni-modality video caption dataset called VAST-27M. Specifically, we first collect 27 million open-domain video clips and separately train a vision and an audio captioner to generate vision and audio captions. Then, we employ an off-the-shelf Large Language Model (LLM) to integrate the generated captions, together with subtitles and instructional prompts into omni-modality captions. Based on the proposed VAST-27M dataset, we train an omni-modality video-text foundational model named VAST, which can perceive and process vision, audio, and subtitle modalities from video, and better support various tasks including vision-text, audio-text, and multi-modal video-text tasks (retrieval, captioning and QA). Extensive experiments have been conducted to demonstrate the effectiveness of our proposed VAST-27M corpus and VAST foundation model. VAST achieves 22 new state-of-the-art results on various cross-modality benchmarks. Code, model and dataset will be released at https://github.com/TXH-mercury/VAST.",4e33c5756aa18d248cf50fef9382acda1e0f65da,Semantic Scholar,,, instruction tuning for fewshot aspectbased sentiment analysis,"['Siddharth Varia', 'Shuai Wang', 'Kishaloy Halder', 'Robert Vacareanu', 'Miguel Ballesteros', 'Yassine Benajiba', 'Neha Ann John', 'Rishita Anubhai', 'S. Muresan', 'D. Roth']",http://arxiv.org/pdf/2210.06629,2022-10-12,,"Aspect-based Sentiment Analysis (ABSA) is a fine-grained sentiment analysis task which involves four elements from user-generated texts: aspect term, aspect category, opinion term, and sentiment polarity. Most computational approaches focus on some of the ABSA sub-tasks such as tuple (aspect term, sentiment polarity) or triplet (aspect term, opinion term, sentiment polarity) extraction using either pipeline or joint modeling approaches. Recently, generative approaches have been proposed to extract all four elements as (one or more) quadruplets from text as a single task. In this work, we take a step further and propose a unified framework for solving ABSA, and the associated sub-tasks to improve the performance in few-shot scenarios. To this end, we fine-tune a T5 model with instructional prompts in a multi-task learning fashion covering all the sub-tasks, as well as the entire quadruple prediction task. In experiments with multiple benchmark datasets, we show that the proposed multi-task prompting approach brings a performance boost (by absolute 8.29 F1) in the few-shot learning setting.",5dbc2b2ee6e65e39fa3fc4bd5030be7a4a9f9a76,Semantic Scholar,,, harnessing large language models' empathetic response generation capabilities for online mental health counselling support,"['Siyuan Brandon Loh', 'Aravind Sesagiri Raamkumar']",https://arxiv.org/pdf/2310.08017,2023-10-12,,"Large Language Models (LLMs) have demonstrated remarkable performance across various information-seeking and reasoning tasks. These computational systems drive state-of-the-art dialogue systems, such as ChatGPT and Bard. They also carry substantial promise in meeting the growing demands of mental health care, albeit relatively unexplored. As such, this study sought to examine LLMs' capability to generate empathetic responses in conversations that emulate those in a mental health counselling setting.
We selected five LLMs: version 3.5 and version 4 of the Generative Pre-training (GPT), Vicuna FastChat-T5, Pathways Language Model (PaLM) version 2, and Falcon-7B-Instruct. Based on a simple instructional prompt, these models responded to utterances derived from the EmpatheticDialogues (ED) dataset. Using three empathy-related metrics, we compared their responses to those from traditional response generation dialogue systems, which were fine-tuned on the ED dataset, along with human-generated responses. Notably, we discovered that responses from the LLMs were remarkably more empathetic in most scenarios. We position our findings in light of catapulting advancements in creating empathetic conversational systems.",88a3abf671d922ebd61a34007908a5f6b6978bd4,Semantic Scholar,,, promptbased learning for thread structure prediction in cybersecurity forums,"['Kazuaki Kashihara', 'Kuntal Kumar Pal', 'Chitta Baral', 'Robert P. Trevino']",http://arxiv.org/pdf/2303.05400,2023-03-05,,"With recent trends indicating cyber crimes increasing in both frequency and cost, it is imperative to develop new methods that leverage data-rich hacker forums to assist in combating ever evolving cyber threats. Defining interactions within these forums is critical as it facilitates identifying highly skilled users, which can improve prediction of novel threats and future cyber attacks. We propose a method called Next Paragraph Prediction with Instructional Prompting (NPP-IP) to predict thread structures while grounded on the context around posts. This is the first time to apply an instructional prompting approach to the cybersecurity domain. We evaluate our NPP-IP with the Reddit dataset and Hacker Forums dataset that has posts and thread structures of real hacker forums' threads, and compare our method's performance with existing methods. The experimental evaluation shows that our proposed method can predict the thread structure significantly better than existing methods allowing for better social network prediction based on forum interactions.",a71207f1d036969bf92959ea56cf146d5d8eb297,Semantic Scholar,,, impressiongpt an iterative optimizing framework for radiology report summarization with chatgpt,"['Chong Ma', 'Zihao Wu', 'Jiaqi Wang', 'Shaochen Xu', 'Yaonai Wei', 'Zheng Liu', 'Lei Guo', 'Xiaoya Cai', 'Shu Zhang', 'Tuo Zhang', 'Dajiang Zhu', 'Dinggang Shen', 'Tianming Liu', 'Xiang Li']",http://arxiv.org/pdf/2304.08448,2023-04-17,,"The 'Impression' section of a radiology report is a critical basis for communication between radiologists and other physicians, and it is typically written by radiologists based on the 'Findings' section. However, writing numerous impressions can be laborious and error-prone for radiologists. Although recent studies have achieved promising results in automatic impression generation using large-scale medical text data for pre-training and fine-tuning pre-trained language models, such models often require substantial amounts of medical text data and have poor generalization performance. While large language models (LLMs) like ChatGPT have shown strong generalization capabilities and performance, their performance in specific domains, such as radiology, remains under-investigated and potentially limited. To address this limitation, we propose ImpressionGPT, which leverages the in-context learning capability of LLMs by constructing dynamic contexts using domain-specific, individualized data. 
This dynamic prompt approach enables the model to learn contextual knowledge from semantically similar examples from existing data. Additionally, we design an iterative optimization algorithm that performs automatic evaluation on the generated impression results and composes the corresponding instruction prompts to further optimize the model. The proposed ImpressionGPT model achieves state-of-the-art performance on both MIMIC-CXR and OpenI datasets without requiring additional training data or fine-tuning the LLMs. This work presents a paradigm for localizing LLMs that can be applied in a wide range of similar application scenarios, bridging the gap between general-purpose LLMs and the specific language processing needs of various domains.",a7f8fd45fbcdd81449cb7a1a6a2b2c18b38f8151,Semantic Scholar,,, camoscio an italian instructiontuned llama,"['Andrea Santilli', 'E. Rodolà']",https://arxiv.org/pdf/2307.16456,2023-07-31,,"In recent years Large Language Models (LLMs) have increased the state of the art on several natural language processing tasks. However, their accessibility is often limited to paid API services, posing challenges for researchers in conducting extensive investigations. On the other hand, while some open-source models have been proposed by the community, they are typically English-centric or multilingual without a specific adaptation for the Italian language. In an effort to democratize the available and open resources for the Italian language, in this paper we introduce Camoscio: a language model specifically tuned to follow users' prompts in Italian. Specifically, we finetuned the smallest variant of LLaMA (7b) with LoRA on a corpus of instruction prompts translated to Italian via ChatGPT. Results indicate that the model's zero-shot performance on various downstream tasks in Italian competes favorably with existing models specifically finetuned for those tasks. All the artifacts (code, dataset, model) are released to the community at the following url: https://github.com/teelinsan/camoscio",a7ff4d1a89baa5007b3c9ee46492aaf88dfc257f,Semantic Scholar,,, layout and task aware instruction prompt for zeroshot document image question answering,"['Wenjin Wang', 'Yunhao Li', 'Yixin Ou', 'Yin Zhang']",https://arxiv.org/pdf/2306.00526,2023-06-01,,"Layout-aware pre-trained models has achieved significant progress on document image question answering. They introduce extra learnable modules into existing language models to capture layout information within document images from text bounding box coordinates obtained by OCR tools. However, extra modules necessitate pre-training on extensive document images. This prevents these methods from directly utilizing off-the-shelf instruction-tuning language foundation models, which have recently shown promising potential in zero-shot learning. Instead, in this paper, we find that instruction-tuning language models like Claude and ChatGPT can understand layout by spaces and line breaks. Based on this observation, we propose the LAyout and Task aware Instruction Prompt (LATIN-Prompt), which consists of layout-aware document content and task-aware instruction. Specifically, the former uses appropriate spaces and line breaks to recover the layout information among text segments obtained by OCR tools, and the latter ensures that generated answers adhere to formatting requirements. Moreover, we propose the LAyout and Task aware Instruction Tuning (LATIN-Tuning) to improve the performance of small instruction-tuning models like Alpaca. 
Experimental results show that LATIN-Prompt enables zero-shot performance of Claude and ChatGPT to be comparable to the fine-tuning performance of SOTAs on document image question answering, and LATIN-Tuning enhances the zero-shot performance of Alpaca significantly. For example, LATIN-Prompt improves the performance of Claude and ChatGPT on DocVQA by 263% and 20% respectively. LATIN-Tuning improves the performance of Alpaca on DocVQA by 87.7%. Quantitative and qualitative analyses demonstrate the effectiveness of LATIN-Prompt and LATIN-Tuning. We provide the code in supplementary and will release it to facilitate future research.",aa828072e36be23887eeb3ac277901d8f893ef53,Semantic Scholar,,, mondrian prompt abstraction attack against large language models for cheaper api pricing,"['Waiman Si', 'M. Backes', 'Yang Zhang']",https://arxiv.org/pdf/2308.03558,2023-08-07,,"The Machine Learning as a Service (MLaaS) market is rapidly expanding and becoming more mature. For example, OpenAI's ChatGPT is an advanced large language model (LLM) that generates responses for various queries with associated fees. Although these models can deliver satisfactory performance, they are far from perfect. Researchers have long studied the vulnerabilities and limitations of LLMs, such as adversarial attacks and model toxicity. Inevitably, commercial ML models are also not exempt from such issues, which can be problematic as MLaaS continues to grow. In this paper, we discover a new attack strategy against LLM APIs, namely the prompt abstraction attack. Specifically, we propose Mondrian, a simple and straightforward method that abstracts sentences, which can lower the cost of using LLM APIs. In this approach, the adversary first creates a pseudo API (with a lower established price) to serve as the proxy of the target API (with a higher established price). Next, the pseudo API leverages Mondrian to modify the user query, obtain the abstracted response from the target API, and forward it back to the end user. Our results show that Mondrian successfully reduces user queries' token length ranging from 13% to 23% across various tasks, including text classification, generation, and question answering. Meanwhile, these abstracted queries do not significantly affect the utility of task-specific and general language models like ChatGPT. Mondrian also reduces instruction prompts' token length by at least 11% without compromising output quality. As a result, the prompt abstraction attack enables the adversary to profit without bearing the cost of API development and deployment.",afa0188e454495c08bfaecf29596f01efb468b9a,Semantic Scholar,,, linguist language model instruction tuning to generate annotated utterances for intent classification and slot tagging,"['Andrew Rosenbaum', 'Saleh Soltan', 'Wael Hamza', 'Yannick Versley', 'M. Boese']",http://arxiv.org/pdf/2209.09900,2022-09-20,,"We present LINGUIST, a method for generating annotated data for Intent Classification and Slot Tagging (IC+ST), via fine-tuning AlexaTM 5B, a 5-billion-parameter multilingual sequence-to-sequence (seq2seq) model, on a flexible instruction prompt. In a 10-shot novel intent setting for the SNIPS dataset, LINGUIST surpasses state-of-the-art approaches (Back-Translation and Example Extrapolation) by a wide margin, showing absolute improvement for the target intents of +1.9 points on IC Recall and +2.5 points on ST F1 Score. 
In the zero-shot cross-lingual setting of the mATIS++ dataset, LINGUIST out-performs a strong baseline of Machine Translation with Slot Alignment by +4.14 points absolute on ST F1 Score across 6 languages, while matching performance on IC. Finally, we verify our results on an internal large-scale multilingual dataset for conversational agent IC+ST and show significant improvements over a baseline which uses Back-Translation, Paraphrasing and Slot Catalog Resampling. To our knowledge, we are the first to demonstrate instruction fine-tuning of a large-scale seq2seq model to control the outputs of multilingual intent- and slot-labeled data generation.",cb5cfc2dd4965262d2ce302362b1f2dbfa4a5419,Semantic Scholar,,, "grips gradientfree, editbased instruction search for prompting large language models","['Archiki Prasad', 'Peter Hase', 'Xiang Zhou', 'Mohit Bansal']",http://arxiv.org/pdf/2203.07281,2022-03-14,,"Providing natural language instructions in prompts is a useful new paradigm for improving task performance of large language models in a zero-shot setting. Recent work has aimed to improve such prompts via manual rewriting or gradient-based tuning. However, manual rewriting is time-consuming and requires subjective interpretation, while gradient-based tuning can be extremely computationally demanding for large models and may not be feasible for API-based models. In this work, we introduce Gradient-free Instructional Prompt Search (GrIPS), a gradient-free, edit-based search approach for improving task instructions for large language models. GrIPS takes in instructions designed for humans and automatically returns an improved, edited prompt, while allowing for API-based tuning. With InstructGPT models, GrIPS improves the average task performance by up to 4.30 percentage points on eight classification tasks from the Natural Instructions dataset (with similar improvements for OPT, BLOOM, and FLAN-T5). We see improvements for both instruction-only prompts and instruction + k-shot examples prompts. Notably, GrIPS outperforms manual rewriting and purely example-based prompts while controlling for the available compute and data budget. Further, performance of GrIPS is comparable to select gradient-based tuning approaches. Qualitatively, we show our edits can simplify instructions and at times make them incoherent but nonetheless improve accuracy.",cf934ddd3c852ba9c67cdfd21bf41e7723fc6d9e,Semantic Scholar,,, casteist but not racist quantifying disparities in large language model bias between india and the west,"['Khyati Khandelwal', 'Manuel Tonneau', 'Andrew M. Bean', 'Hannah Rose Kirk', 'Scott A. Hale']",https://arxiv.org/pdf/2309.08573,2023-09-15,,"Large Language Models (LLMs), now used daily by millions of users, can encode societal biases, exposing their users to representational harms. A large body of scholarship on LLM bias exists but it predominantly adopts a Western-centric frame and attends comparatively less to bias levels and potential harms in the Global South. In this paper, we quantify stereotypical bias in popular LLMs according to an Indian-centric frame and compare bias levels between the Indian and Western contexts. To do this, we develop a novel dataset which we call Indian-BhED (Indian Bias Evaluation Dataset), containing stereotypical and anti-stereotypical examples for caste and religion contexts. We find that the majority of LLMs tested are strongly biased towards stereotypes in the Indian context, especially as compared to the Western context. 
We finally investigate Instruction Prompting as a simple intervention to mitigate such bias and find that it significantly reduces both stereotypical and anti-stereotypical biases in the majority of cases for GPT-3.5. The findings of this work highlight the need for including more diverse voices when evaluating LLMs.",e4282cab4a435d5249fc8db49fc1c9268438fedb,Semantic Scholar,,, selfalignment with instruction backtranslation,"['Xian Li', 'Ping Yu', 'Chunting Zhou', 'Timo Schick', 'Luke Zettlemoyer', 'Omer Levy', 'J. Weston', 'M. Lewis']",https://arxiv.org/pdf/2308.06259,2023-08-11,,"We present a scalable method to build a high quality instruction following language model by automatically labelling human-written text with corresponding instructions. Our approach, named instruction backtranslation, starts with a language model finetuned on a small amount of seed data, and a given web corpus. The seed model is used to construct training examples by generating instruction prompts for web documents (self-augmentation), and then selecting high quality examples from among these candidates (self-curation). This data is then used to finetune a stronger model. Finetuning LLaMa on two iterations of our approach yields a model that outperforms all other LLaMa-based models on the Alpaca leaderboard not relying on distillation data, demonstrating highly effective self-alignment.",f2ba9e7d9624bd94a786ea5e3161a9425a21a475,Semantic Scholar,,, inboxbart get instructions into biomedical multitask learning,"['Mihir Parmar', 'Swaroop Mishra', 'Mirali Purohit', 'Man Luo', 'M. H. Murad', 'Chitta Baral']",http://arxiv.org/pdf/2204.07600,2022-04-15,,"Single-task models have proven pivotal in solving specific tasks; however, they have limitations in real-world applications where multi-tasking is necessary and domain shifts are exhibited. Recently, instructional prompts have shown significant improvement towards multi-task generalization; however, the effect of instructional prompts and Multi-Task Learning (MTL) has not been systematically studied in the biomedical domain. Motivated by this, this paper explores the impact of instructional prompts for biomedical MTL. We introduce the BoX, a collection of 32 instruction tasks for Biomedical NLP across (X) various categories. Using this meta-dataset, we propose a unified model termed In-BoXBART, that can jointly learn all tasks of the BoX without any task-specific modules. To the best of our knowledge, this is the first attempt to propose a unified model in the biomedical domain and use instructions to achieve generalization across several biomedical tasks. Experimental results indicate that the proposed model: 1) outperforms the single-task baseline by ~3% and multi-task (without instruction) baseline by ~18% on an average, and 2) shows ~23% improvement compared to the single-task baseline in few-shot learning (i.e., 32 instances per task) on an average. Our analysis indicates that there is significant room for improvement across tasks in the BoX, implying the scope for future research direction.",fb30166c218bef3597b0d9789ad340defc3989ca,Semantic Scholar,,, cocomo computational consciousness modeling for generative and ethical ai,['Edward Y. Chang'],http://arxiv.org/pdf/2304.02438,2023-03-17,,"The CoCoMo model proposes a computational solution to the challenge of incorporating ethical and emotional intelligence considerations into AI systems, with the aim of creating AI agents that combine knowledge with compassion. 
To achieve this goal, CoCoMo prioritizes fairness, beneficence, non-maleficence, empathy, adaptability, transparency, and critical and exploratory thinking abilities. The model employs consciousness modeling, reinforcement learning, and prompt template formulation to support these desired traits. By incorporating ethical and emotional intelligence considerations, a generative AI model can potentially lead to improved fairness, reduced toxicity, and increased reliability.",12bad2032f3efa5a142d7dd25712960a4f9ca5a7,Semantic Scholar,,, global constraints with prompting for zeroshot event argument classification,"['Zizheng Lin', 'Hongming Zhang', 'Yangqiu Song']",http://arxiv.org/pdf/2302.04459,2023-02-09,,"Determining the role of event arguments is a crucial subtask of event extraction. Most previous supervised models leverage costly annotations, which is not practical for open-domain applications. In this work, we propose to use global constraints with prompting to effectively tackles event argument classification without any annotation and task-specific training. Specifically, given an event and its associated passage, the model first creates several new passages by prefix prompts and cloze prompts, where prefix prompts indicate event type and trigger span, and cloze prompts connect each candidate role with the target argument span. Then, a pre-trained language model scores the new passages, making the initial prediction. Our novel prompt templates can easily adapt to all events and argument types without manual effort. Next, the model regularizes the prediction by global constraints exploiting cross-task, cross-argument, and cross-event relations. Extensive experiments demonstrate our model’s effectiveness: it outperforms the best zero-shot baselines by 12.5% and 10.9% F1 on ACE and ERE with given argument spans and by 4.3% and 3.3% F1, respectively, without given argument spans. We have made our code publicly available.",1467ced85b3ae2d695079a1557063a445c43988a,Semantic Scholar,,, a unified framework for multiintent spoken language understanding with prompting,"['Feifan Song', 'Lianzhe Huang', 'Houfeng Wang']",http://arxiv.org/pdf/2210.03337,2022-10-07,,"Multi-intent Spoken Language Understanding has great potential for widespread implementation. Jointly modeling Intent Detection and Slot Filling in it provides a channel to exploit the correlation between intents and slots. However, current approaches are apt to formulate these two sub-tasks differently, which leads to two issues: 1) It hinders models from effective extraction of shared features. 2) Pretty complicated structures are involved to enhance expression ability while causing damage to the interpretability of frameworks. In this work, we describe a Prompt-based Spoken Language Understanding (PromptSLU) framework, to intuitively unify two sub-tasks into the same form by offering a common pre-trained Seq2Seq model. In detail, ID and SF are completed by concisely filling the utterance into task-specific prompt templates as input, and sharing output formats of key-value pairs sequence. Furthermore, variable intents are predicted first, then naturally embedded into prompts to guide slot-value pairs inference from a semantic perspective. Finally, we are inspired by prevalent multi-task learning to introduce an auxiliary sub-task, which helps to learn relationships among provided labels. 
Experiment results show that our framework outperforms several state-of-the-art baselines on two public datasets.",171412ef2410fad3f9a09238ad9e272c4e31aed4,Semantic Scholar,,, knowprompt knowledgeaware prompttuning with synergistic optimization for relation extraction,"['Xiang Chen', 'Ningyu Zhang', 'Ningyu Zhang', 'Xin Xie', 'Shumin Deng', 'Yunzhi Yao', 'Chuanqi Tan', 'Fei Huang', 'Luo Si', 'Huajun Chen']",https://arxiv.org/pdf/2104.07650,2021-04-15,,"Recently, prompt-tuning has achieved promising results for specific few-shot classification tasks. The core idea of prompt-tuning is to insert text pieces (i.e., templates) into the input and transform a classification task into a masked language modeling problem. However, for relation extraction, determining an appropriate prompt template requires domain expertise, and it is cumbersome and time-consuming to obtain a suitable label word. Furthermore, there exists abundant semantic and prior knowledge among the relation labels that cannot be ignored. To this end, we focus on incorporating knowledge among relation labels into prompt-tuning for relation extraction and propose a Knowledge-aware Prompt-tuning approach with synergistic optimization (KnowPrompt). Specifically, we inject latent knowledge contained in relation labels into prompt construction with learnable virtual type words and answer words. Then, we synergistically optimize their representation with structured constraints. Extensive experimental results on five datasets with standard and low-resource settings demonstrate the effectiveness of our approach. Our code and datasets are available in GitHub1 for reproducibility.",1a2e90dff605dad7dbefeed121e6d295c7a77d62,Semantic Scholar,,, visual prompting for adversarial robustness,"['Aochuan Chen', 'P. Lorenz', 'Yuguang Yao', 'Pin-Yu Chen', 'Sijia Liu']",https://arxiv.org/pdf/2210.06284,2022-10-12,,"In this work, we leverage visual prompting (VP) to improve adversarial robustness of a fixed, pre-trained model at test time. Compared to conventional adversarial defenses, VP allows us to design universal (i.e., data-agnostic) input prompting templates, which have plug-and-play capabilities at test time to achieve desired model performance without introducing much computation overhead. Although VP has been successfully applied to improving model generalization, it remains elusive whether and how it can be used to defend against adversarial attacks. We investigate this problem and show that the vanilla VP approach is not effective in adversarial defense since a universal input prompt lacks the capacity for robust learning against sample-specific adversarial perturbations. To circumvent it, we propose a new VP method, termed Class-wise Adversarial Visual Prompting (C-AVP), to generate class-wise visual prompts so as to not only leverage the strengths of ensemble prompts but also optimize their interrelations to improve model robustness. Our experiments show that C-AVP outperforms the conventional VP method, with 2.1× standard accuracy gain and 2× robust accuracy gain. Compared to classical test-time defenses, C-AVP also yields a 42× inference time speedup. Code is available at https://github.com/Phoveran/vp-for-adversarial-robustness.",20cb40199d03395d63615854863f9eda9c7863e2,Semantic Scholar,,, rethinking the event coding pipeline with prompt entailment,"['C. Lefebvre', 'Niklas Stoehr']",http://arxiv.org/pdf/2210.05257,2022-10-11,,"For monitoring crises, political events are extracted from the news. 
The large amount of unstructured full-text event descriptions makes a case-by-case analysis unmanageable, particularly for low-resource humanitarian aid organizations. This creates a demand to classify events into event types, a task referred to as event coding. Typically, domain experts craft an event type ontology, annotators label a large dataset and technical experts develop a supervised coding system. In this work, we propose PR-ENT, a new event coding approach that is more flexible and resource-efficient, while maintaining competitive accuracy: first, we extend an event description such as “Military injured two civilians” by a template, e.g. “People were [Z]” and prompt a pre-trained (cloze) language model to fill the slot Z. Second, we select suitable answer candidates Zstar = “injured”, “hurt”... by treating the event description as premise and the filled templates as hypothesis in a textual entailment task. In a final step, the selected answer candidate can be mapped to its corresponding event type. This allows domain experts to draft the codebook directly as labeled prompts and interpretable answer candidates. This human-in-the-loop process is guided by our codebook design tool. We show that our approach is robust through several checks: perturbing the event description and prompt template, restricting the vocabulary and removing contextual information.",236375f49e3deb8ee7918c1f5e65175e453deb2e,Semantic Scholar,,, positionbased prompting for health outcome generation,"['Micheal Abaho', 'D. Bollegala', 'P. Williamson', 'S. Dodd']",http://arxiv.org/pdf/2204.03489,2022-03-30,,"Probing factual knowledge in Pre-trained Language Models (PLMs) using prompts has indirectly implied that language models (LMs) can be treated as knowledge bases. To this end, this phenomenon has been effective, especially when these LMs are fine-tuned towards not just data, but also to the style or linguistic pattern of the prompts themselves. We observe that satisfying a particular linguistic pattern in prompts is an unsustainable, time-consuming constraint in the probing task, especially because they are often manually designed and the range of possible prompt template patterns can vary depending on the prompting task. To alleviate this constraint, we propose using a position-attention mechanism to capture positional information of each word in a prompt relative to the mask to be filled, hence avoiding the need to re-construct prompts when the prompts’ linguistic pattern changes. Using our approach, we demonstrate the ability of eliciting answers (in a case study on health outcome generation) to not only common prompt templates like Cloze and Prefix but also rare ones too, such as Postfix and Mixed patterns whose masks are respectively at the start and in multiple random places of the prompt. More so, using various biomedical PLMs, our approach consistently outperforms a baseline in which the default PLMs representation is used to predict masked tokens.",2c12d24c5ba5ad3bb3994635fcfcb9f8caac31d0,Semantic Scholar,,, prompting chatgpt in mner enhanced multimodal named entity recognition with auxiliary refined knowledge,"['Jinyuan Li', 'Han Li', 'Zhufeng Pan', 'Gang Pan']",https://aclanthology.org/2023.findings-emnlp.184.pdf,2023-05-20,,"Multimodal Named Entity Recognition (MNER) on social media aims to enhance textual entity prediction by incorporating image-based clues. 
Existing studies mainly focus on maximizing the utilization of pertinent image information or incorporating external knowledge from explicit knowledge bases. However, these methods either neglect the necessity of providing the model with external knowledge, or encounter issues of high redundancy in the retrieved knowledge. In this paper, we present PGIM -- a two-stage framework that aims to leverage ChatGPT as an implicit knowledge base and enable it to heuristically generate auxiliary knowledge for more efficient entity prediction. Specifically, PGIM contains a Multimodal Similar Example Awareness module that selects suitable examples from a small number of predefined artificial samples. These examples are then integrated into a formatted prompt template tailored to the MNER and guide ChatGPT to generate auxiliary refined knowledge. Finally, the acquired knowledge is integrated with the original text and fed into a downstream model for further processing. Extensive experiments show that PGIM outperforms state-of-the-art methods on two classic MNER datasets and exhibits a stronger robustness and generalization capability.",2c23a8c8b65c3dfe3bdbe93e60e04637fee48e2b,Semantic Scholar,,, metricprompt prompting model as a relevance metric for fewshot text classification,"['Hongyuan Dong', 'Weinan Zhang', 'Wanxiang Che']",https://arxiv.org/pdf/2306.08892,2023-06-15,,"Prompting methods have shown impressive performance in a variety of text mining tasks and applications, especially few-shot ones. Despite the promising prospects, the performance of prompting model largely depends on the design of prompt template and verbalizer. In this work, we propose MetricPrompt, which eases verbalizer design difficulty by reformulating few-shot text classification task into text pair relevance estimation task. MetricPrompt adopts prompting model as the relevance metric, further bridging the gap between Pre-trained Language Model's (PLM) pre-training objective and text classification task, making possible PLM's smooth adaption. Taking a training sample and a query one simultaneously, MetricPrompt captures cross-sample relevance information for accurate relevance estimation. We conduct experiments on three widely used text classification datasets across four few-shot settings. Results show that MetricPrompt outperforms manual verbalizer and other automatic verbalizer design methods across all few-shot settings, achieving new state-of-the-art (SOTA) performance.",2e403ad2cd02409e1fdc15839da0a3f89886a990,Semantic Scholar,,, prompt learning for news recommendation,"['Zizhuo Zhang', 'Bang-wei Wang']",https://arxiv.org/pdf/2304.05263,2023-04-11,,"Some recent news recommendation (NR) methods introduce a Pre-trained Language Model (PLM) to encode news representation by following the vanilla pre-train and fine-tune paradigm with carefully-designed recommendation-specific neural networks and objective functions. Due to the inconsistent task objective with that of PLM, we argue that their modeling paradigm has not well exploited the abundant semantic information and linguistic knowledge embedded in the pre-training process. Recently, the pre-train, prompt, and predict paradigm, called prompt learning, has achieved many successes in natural language processing domain. In this paper, we make the first trial of this new paradigm to develop a Prompt Learning for News Recommendation (Prompt4NR) framework, which transforms the task of predicting whether a user would click a candidate news as a cloze-style mask-prediction task. 
Specifically, we design a series of prompt templates, including discrete, continuous, and hybrid templates, and construct their corresponding answer spaces to examine the proposed Prompt4NR framework. Furthermore, we use the prompt ensembling to integrate predictions from multiple prompt templates. Extensive experiments on the MIND dataset validate the effectiveness of our Prompt4NR with a set of new benchmark results.",2ee1f98649ff27378fc341cae907eb89aba8fba4,Semantic Scholar,,, groundtruth labels matter a deeper look into inputlabel demonstrations,"['Junyeob Kim', 'Hyuhng Joon Kim', 'Hyunsoo Cho', 'Hwiyeol Jo', 'Sang-Woo Lee', 'Sang-goo Lee', 'Kang Min Yoo', 'Taeuk Kim']",http://arxiv.org/pdf/2205.12685,2022-05-25,,"Despite recent explosion of interests in in-context learning, the underlying mechanism and the precise impact of the quality of demonstrations remain elusive. Intuitively, ground-truth labels should have as much impact in in-context learning (ICL) as supervised learning, but recent work reported that the input-label correspondence is significantly less important than previously thought. Intrigued by this counter-intuitive observation, we re-examine the importance of ground-truth labels in in-context learning. With the introduction of two novel metrics, namely Label-Correctness Sensitivity and Ground-truth Label Effect Ratio (GLER), we were able to conduct quantifiable analysis on the impact of ground-truth label demonstrations. Through extensive analyses, we find that the correct input-label mappings can have varying impacts on the downstream in-context learning performances, depending on the experimental configuration. Through additional studies, we identify key components, such as the verbosity of prompt templates and the language model size, as the controlling factor to achieve more noise-resilient ICL.",316206a2f89eb94ce02a81fba1dc304586f21b39,Semantic Scholar,,, lowresource multigranularity academic function recognition based on multiple prompt knowledge,"['Jiawei Liu', 'Ziteng Xiong', 'Yi-ping Jiang', 'Yongqiang Ma', 'Wei Lu', 'Yong Huang', 'Qikai Cheng']",http://arxiv.org/pdf/2305.03287,2023-05-05,,"Fine-tuning pre-trained language models (PLMs), e.g., SciBERT, generally requires large numbers of annotated data to achieve state-of-the-art performance on a range of NLP tasks in the scientific domain. However, obtaining the fine-tune data for scientific NLP task is still challenging and expensive. Inspired by recent advancement in prompt learning, in this paper, we propose the Mix Prompt Tuning (MPT), which is a semi-supervised method to alleviate the dependence on annotated data and improve the performance of multi-granularity academic function recognition tasks with a small number of labeled examples. Specifically, the proposed method provides multi-perspective representations by combining manual prompt templates with automatically learned continuous prompt templates to help the given academic function recognition task take full advantage of knowledge in PLMs. Based on these prompt templates and the fine-tuned PLM, a large number of pseudo labels are assigned to the unlabeled examples. Finally, we fine-tune the PLM using the pseudo training set. We evaluate our method on three academic function recognition tasks of different granularity including the citation function, the abstract sentence function, and the keyword function, with datasets from computer science domain and biomedical domain.
Extensive experiments demonstrate the effectiveness of our method and statistically significant improvements against strong baselines. In particular, it achieves an average increase of 5% in Macro-F1 score compared with fine-tuning, and 6% in Macro-F1 score compared with other semi-supervised method under low-resource settings. In addition, MPT is a general method that can be easily applied to other low-resource scientific classification tasks.",35d2276749c2c31290d2ff410a305112e742da71,Semantic Scholar,,, unihd at tsar2022 shared task is compute all we need for lexical simplification,"['Dennis Aumiller', 'Michael Gertz']",http://arxiv.org/pdf/2301.01764,2023-01-04,,"Previous state-of-the-art models for lexical simplification consist of complex pipelines with several components, each of which requires deep technical knowledge and fine-tuned interaction to achieve its full potential. As an alternative, we describe a frustratingly simple pipeline based on prompted GPT-3 responses, beating competing approaches by a wide margin in settings with few training instances. Our best-performing submission to the English language track of the TSAR-2022 shared task consists of an “ensemble” of six different prompt templates with varying context levels. As a late-breaking result, we further detail a language transfer technique that allows simplification in languages other than English. Applied to the Spanish and Portuguese subset, we achieve state-of-the-art results with only minor modification to the original prompts. Aside from detailing the implementation and setup, we spend the remainder of this work discussing the particularities of prompting and implications for future work. Code for the experiments is available online at https://github.com/dennlinger/TSAR-2022-Shared-Task.",40fba1fc70e23abf9a3ea428f186dd44e57723fb,Semantic Scholar,,, can language models be biomedical knowledge bases,"['Mujeen Sung', 'Jinhyuk Lee', 'Sean S. Yi', 'Minji Jeon', 'Sungdong Kim', 'Jaewoo Kang']",https://aclanthology.org/2021.emnlp-main.388.pdf,2021-09-15,,"Pre-trained language models (LMs) have become ubiquitous in solving various natural language processing (NLP) tasks. There has been increasing interest in what knowledge these LMs contain and how we can extract that knowledge, treating LMs as knowledge bases (KBs). While there has been much work on probing LMs in the general domain, there has been little attention to whether these powerful LMs can be used as domain-specific KBs. To this end, we create the BioLAMA benchmark, which is comprised of 49K biomedical factual knowledge triples for probing biomedical LMs. We find that biomedical LMs with recently proposed probing methods can achieve up to 18.51% Acc@5 on retrieving biomedical knowledge. Although this seems promising given the task difficulty, our detailed analyses reveal that most predictions are highly correlated with prompt templates without any subjects, hence producing similar results on each relation and hindering their capabilities to be used as domain-specific KBs. We hope that BioLAMA can serve as a challenging benchmark for biomedical factual probing.",4c5f4ddc68be643fb34ea969bf2c105ff7538995,Semantic Scholar,,, dynamar dynamic prompt with mask token representation,"['Xiaodi Sun', 'Sunny Rajagopalan', 'Priyank Nigam', 'Weiyi Lu', 'Yi Xu', 'Belinda Zeng', 'Trishul M. 
Chilimbi']",https://arxiv.org/pdf/2206.02982,2022-06-07,,"Recent research has shown that large language models pretrained using unsupervised approaches can achieve significant performance improvement on many downstream tasks. Typically when adapting these language models to downstream tasks, like a classification or regression task, we employ a fine-tuning paradigm in which the sentence representation from the language model is input to a task-specific head; the model is then fine-tuned end-to-end. However, with the emergence of models like GPT-3, prompt-based fine-tuning has been proven to be a successful approach for few-shot tasks. Inspired by this work, we study discrete prompt technologies in practice. There are two issues that arise with the standard prompt approach. First, it can overfit on the prompt template. Second, it requires manual effort to formulate the downstream task as a language model problem. In this paper, we propose an improvement to prompt-based fine-tuning that addresses these two issues. We refer to our approach as DynaMaR -- Dynamic Prompt with Mask Token Representation. Results show that DynaMaR can achieve an average improvement of 10% in few-shot settings and improvement of 3.7% in data-rich settings over the standard fine-tuning approach on four e-commerce applications.",5d5b6b6c033c36a8b730042392cd29da84b67481,Semantic Scholar,,, citeprompt using prompts to identify citation intent in scientific papers,"['Avishek Lahiri', 'Debarshi Kumar Sanyal', 'Imon Mukherjee']",https://arxiv.org/pdf/2304.12730,2023-04-25,,"Citations in scientific papers not only help us trace the intellectual lineage but also are a useful indicator of the scientific significance of the work. Citation intents prove beneficial as they specify the role of the citation in a given context. We present a tool Citeprompt which uses the hitherto unexplored approach of prompt learning for citation intent classification. We argue that with the proper choice of the pretrained language model, the prompt template, and the prompt verbalizer, we can not only get results that are better than or comparable to those obtained with the state-of-the-art methods but also do it with much less exterior information about the scientific document. We report state-of-the-art results on the ACL-ARC dataset, and also show significant improvement on the SciCite dataset over all baseline models except one. As suitably large labelled datasets for citation intent classification can be quite hard to find, in a first, we propose the conversion of this task to the few-shot and zero-shot settings. For the ACL-ARC dataset, we report a 53.86% F1 score for the zero-shot setting, which improves to 63.61% and 66.99% for the 5-shot and 10-shot settings respectively.",68ee8a53f0b1ff146194980337dd6d533b17c59b,Semantic Scholar,,, multilabel fewshot icd coding as autoregressive generation with prompt,"['Zhichao Yang', 'Sunjae Kwon', 'Zonghai Yao', 'Hongfeng Yu']",https://arxiv.org/pdf/2211.13813,2022-11-24,,"Automatic International Classification of Diseases (ICD) coding aims to assign multiple ICD codes to a medical note with an average of 3,000+ tokens. This task is challenging due to the high-dimensional space of multi-label assignment (155,000+ ICD code candidates) and the long-tail challenge - Many ICD codes are infrequently assigned yet infrequent ICD codes are important clinically. This study addresses the long-tail challenge by transforming this multi-label classification task into an autoregressive generation task. 
Specifically, we first introduce a novel pretraining objective to generate free text diagnoses and procedures using the SOAP structure, the medical logic physicians use for note documentation. Second, instead of directly predicting the high dimensional space of ICD codes, our model generates the lower dimension of text descriptions, which then infers ICD codes. Third, we designed a novel prompt template for multi-label classification. We evaluate our Generation with Prompt (GPsoap) model with the benchmark of all code assignment (MIMIC-III-full) and few shot ICD code assignment evaluation benchmark (MIMIC-III-few). Experiments on MIMIC-III-few show that our model performs with a macro F1 of 30.2, which substantially outperforms the previous MIMIC-III-full SOTA model (macro F1 4.3) and the model specifically designed for few/zero shot setting (macro F1 18.7). Finally, we design a novel ensemble learner, a cross-attention reranker with prompts, to integrate previous SOTA and our best few-shot coding predictions. Experiments on MIMIC-III-full show that our ensemble learner substantially improves both macro and micro F1, from 10.4 to 14.6 and from 58.2 to 59.1, respectively.",6b87c9700b8de4912fe7c361574640b5dc536ca9,Semantic Scholar,,, diffugen adaptable approach for generating labeled image datasets using stable diffusion models,"['Michael Shenoda', 'Edward Kim']",https://arxiv.org/pdf/2309.00248,2023-09-01,,"Generating high-quality labeled image datasets is crucial for training accurate and robust machine learning models in the field of computer vision. However, the process of manually labeling real images is often time-consuming and costly. To address these challenges associated with dataset generation, we introduce ""DiffuGen,"" a simple and adaptable approach that harnesses the power of stable diffusion models to create labeled image datasets efficiently. By leveraging stable diffusion models, our approach not only ensures the quality of generated datasets but also provides a versatile solution for label generation. In this paper, we present the methodology behind DiffuGen, which combines the capabilities of diffusion models with two distinct labeling techniques: unsupervised and supervised. Distinctively, DiffuGen employs prompt templating for adaptable image generation and textual inversion to enhance diffusion model capabilities.",6c1a53c05f1b1a024af740df84e530d79400ab86,Semantic Scholar,,, llmfuncmapper function identification for interpreting complex clauses in building codes via llm,"['Zhe Zheng', 'Ke Chen', 'Xin Cao', 'Xin-Zheng Lu', 'Jia Lin']",https://arxiv.org/pdf/2308.08728,2023-08-17,,"As a vital stage of automated rule checking (ARC), rule interpretation of regulatory texts requires considerable effort. However, interpreting regulatory clauses with implicit properties or complex computational logic is still challenging due to the lack of domain knowledge and limited expressibility of conventional logic representations. Thus, LLM-FuncMapper, an approach to identifying predefined functions needed to interpret various regulatory clauses based on the large language model (LLM), is proposed. First, by systematic analysis of building codes, a series of atomic functions are defined to capture shared computational logics of implicit properties and complex constraints, creating a database of common blocks for interpreting regulatory clauses.
Then, a prompt template with the chain of thought is developed and further enhanced with a classification-based tuning strategy, to enable common LLMs for effective function identification. Finally, the proposed approach is validated with statistical analysis, experiments, and proof of concept. Statistical analysis reveals a long-tail distribution and high expressibility of the developed function database, with which almost 100% of computer-processible clauses can be interpreted and represented as computer-executable codes. Experiments show that LLM-FuncMapper achieves promising results in identifying relevant predefined functions for rule interpretation. Further proof of concept in automated rule interpretation also demonstrates the possibility of LLM-FuncMapper in interpreting complex regulatory clauses. To the best of our knowledge, this study is the first attempt to introduce LLM for understanding and interpreting complex regulatory clauses, which may shed light on further adoption of LLM in the construction domain.",6c4d35d67f843e7de6ec00c088e339b2237d222c,Semantic Scholar,,, fashionsap symbols and attributes prompt for finegrained fashion visionlanguage pretraining,"['Yunpeng Han', 'Lisai Zhang', 'Qingcai Chen', 'Zhijian Chen', 'Zhonghua Li', 'Jianxin Yang', 'Zhao Cao']",https://arxiv.org/pdf/2304.05051,2023-04-11,,"Fashion vision-language pre-training models have shown efficacy for a wide range of downstream tasks. However, general vision-language pre-training models pay less attention to fine-grained domain features, while these features are important in distinguishing the specific domain tasks from general tasks. We propose a method for fine-grained fashion vision-language pre-training based on fashion Symbols and Attributes Prompt (FashionSAP) to model fine-grained multi-modalities fashion attributes and characteristics. Firstly, we propose the fashion symbols, a novel abstract fashion concept layer, to represent different fashion items and to generalize various kinds of fine-grained fashion features, making modelling fine-grained attributes more effective. Secondly, the attributes prompt method is proposed to make the model learn specific attributes of fashion items explicitly. We design proper prompt templates according to the format of fashion data. Comprehensive experiments are conducted on two public fashion benchmarks, i.e., FashionGen and FashionIQ, and FashionSAP gets SOTA performances for four popular fashion tasks. The ablation study also shows the proposed abstract fashion symbols, and the attribute prompt method enables the model to acquire fine-grained semantics in the fashion domain effectively. The obvious performance gains from FashionSAP provide a new baseline for future fashion task research. The source code is available at https://github.com/hssip/FashionSAP",6f05be4a0045cee3575fb39e88fc361d96f2cc4f,Semantic Scholar,,, relationprompt leveraging prompts to generate synthetic data for zeroshot relation triplet extraction,"['Yew Ken Chia', 'Lidong Bing', 'Soujanya Poria', 'Luo Si']",http://arxiv.org/pdf/2203.09101,2022-03-17,,"Despite the importance of relation extraction in building and representing knowledge, less research is focused on generalizing to unseen relation types. We introduce the task setting of Zero-Shot Relation Triplet Extraction (ZeroRTE) to encourage further research in low-resource relation extraction methods.
Given an input sentence, each extracted triplet consists of the head entity, relation label, and tail entity where the relation label is not seen at the training stage. To solve ZeroRTE, we propose to synthesize relation examples by prompting language models to generate structured texts. Concretely, we unify language model prompts and structured text approaches to design a structured prompt template for generating synthetic relation samples when conditioning on relation label prompts (RelationPrompt). To overcome the limitation for extracting multiple relation triplets in a sentence, we design a novel Triplet Search Decoding method. Experiments on FewRel and Wiki-ZSL datasets show the efficacy of RelationPrompt for the ZeroRTE task and zero-shot relation classification. Our code and data are available at github.com/declare-lab/RelationPrompt.",743dcf234cffd54c4e096a10a284dd81572b16ea,Semantic Scholar,,, instructcv instructiontuned texttoimage diffusion models as vision generalists,"['Yulu Gan', 'Sungwoo Park', 'Alexander Schubert', 'Anthony Philippakis', 'A. Alaa']",https://arxiv.org/pdf/2310.00390,2023-09-30,,"Recent advances in generative diffusion models have enabled text-controlled synthesis of realistic and diverse images with impressive quality. Despite these remarkable advances, the application of text-to-image generative models in computer vision for standard visual recognition tasks remains limited. The current de facto approach for these tasks is to design model architectures and loss functions that are tailored to the task at hand. In this paper, we develop a unified language interface for computer vision tasks that abstracts away task-specific design choices and enables task execution by following natural language instructions. Our approach involves casting multiple computer vision tasks as text-to-image generation problems. Here, the text represents an instruction describing the task, and the resulting image is a visually-encoded task output. To train our model, we pool commonly-used computer vision datasets covering a range of tasks, including segmentation, object detection, depth estimation, and classification. We then use a large language model to paraphrase prompt templates that convey the specific tasks to be conducted on each image, and through this process, we create a multi-modal and multi-task training dataset comprising input and output images along with annotated instructions. Following the InstructPix2Pix architecture, we apply instruction-tuning to a text-to-image diffusion model using our constructed dataset, steering its functionality from a generative model to an instruction-guided multi-task vision learner. Experiments demonstrate that our model, dubbed InstructCV, performs competitively compared to other generalist and task-specific vision models. Moreover, it exhibits compelling generalization capabilities to unseen data, categories, and user instructions.",819f477065088220a6f706cd9ef76dbcb4b4c134,Semantic Scholar,,, promptlearning for crosslingual relation extraction,"['Chiaming Hsu', 'Changtong Zan', 'Liang Ding', 'Longyue Wang', 'Xiaoting Wang', 'Weifeng Liu', 'Fu Lin', 'Wenbin Hu']",https://arxiv.org/pdf/2304.10354,2023-04-20,,"Relation Extraction (RE) is a crucial task in Information Extraction, which entails predicting relationships between entities within a given sentence. However, extending pre-trained RE models to other languages is challenging, particularly in real-world scenarios where Cross-Lingual Relation Extraction (XRE) is required. 
Despite recent advancements in Prompt-Learning, which involves transferring knowledge from Multilingual Pre-trained Language Models (PLMs) to diverse downstream tasks, there is limited research on the effective use of multilingual PLMs with prompts to improve XRE. In this paper, we present a novel XRE algorithm based on Prompt-Tuning, referred to as Prompt-XRE. To evaluate its effectiveness, we design and implement several prompt templates, including hard, soft, and hybrid prompts, and empirically test their performance on competitive multilingual PLMs, specifically mBART. Our extensive experiments, conducted on the low-resource ACE05 benchmark across multiple languages, demonstrate that our Prompt-XRE algorithm significantly outperforms both vanilla multilingual PLMs and other existing models, achieving state-of-the-art performance in XRE. To further show the generalization of our Prompt-XRE on larger data scales, we construct and release a new XRE dataset, WMT17-EnZh XRE, containing 0.9M English-Chinese pairs extracted from WMT 2017 parallel corpus. Experiments on WMT17-EnZh XRE also show the effectiveness of our Prompt-XRE against other competitive baselines. The code and newly constructed dataset are freely available at https://github.com/HSU-CHIA-MING/Prompt-XRE.",850b8f31a1bb762544bd35163923784a664b315a,Semantic Scholar,,, large language and textto3d models for engineering design optimization,"['Thiago Rios', 'S. Menzel', 'B. Sendhoff']",https://arxiv.org/pdf/2307.01230,2023-07-03,,"The current advances in generative artificial intelligence for learning large neural network models with the capability to produce essays, images, music and even 3D assets from text prompts create opportunities for a manifold of disciplines. In the present paper, we study the potential of deep text-to-3D models in the engineering domain and focus on the chances and challenges when integrating and interacting with 3D assets in computational simulation-based design optimization. In contrast to traditional design optimization of 3D geometries that often searches for the optimum designs using numerical representations, e.g. B-Spline surfaces, natural language challenges the optimization framework by requiring a different interpretation of variation operators while at the same time may ease and motivate the human user interaction. Here, we propose and realize a fully automated evolutionary design optimization framework using Shap-E, a recently published text-to-3D asset network by OpenAI, in the context of aerodynamic vehicle optimization. For representing text prompts in the evolutionary optimization, we evaluate (a) a bag-of-words approach based on prompt templates and Wordnet samples, and (b) a tokenisation approach based on prompt templates and the byte pair encoding method from GPT4. In our experiments, we show the text-based representations allow the optimizer to find better performing designs. However, it is important to ensure that the designs generated from prompts are within the object class of application, i.e. diverse and novel designs need to be realistic.
Furthermore, more research is required to develop methods where the strength of text prompt variations and the resulting variations of the 3D designs share causal relations to some degree to improve the optimization.",8c2dbf98b75a01f7e93b68a9407f00b1728b66af,Semantic Scholar,,, teprompt task enlightenment prompt learning for implicit discourse relation recognition,"['Wei Xiang', 'Chao Liang', 'Bang Wang']",http://arxiv.org/pdf/2305.10866,2023-05-18,,"Implicit Discourse Relation Recognition (IDRR) aims at classifying the relation sense between two arguments without an explicit connective. Recently, the ConnPrompt~\cite{Wei.X:et.al:2022:COLING} has leveraged the powerful prompt learning for IDRR based on the fusion of multi-prompt decisions from three different yet much similar connective prediction templates. Instead of multi-prompt ensembling, we propose to design auxiliary tasks with enlightened prompt learning for the IDRR task. Although an auxiliary task is not used to directly output final prediction, we argue that during the joint training some of its learned features can be useful to boost the main task. In light of such motivations, we propose a task enlightenment prompt learning model, called TEPrompt, to fuse learned features from three related tasks for IDRR. In particular, the TEPrompt contains three tasks, viz., Discourse Relation Recognition (DRR), Sense Semantics Classification (SSC) and Annotated Connective Prediction (ACP), each with a unique prompt template and an answer space. In the training phase, we jointly train three prompt learning tasks with shared argument representation. In the testing phase, we only take the DRR output with fused features as the final IDRR decision. Experiments with the same conditions have shown that the proposed TEPrompt outperforms the ConnPrompt. This can be attributed to the promoted decision features and language models benefited from joint-training of auxiliary tasks.",8eeb6cf85e6bf305fb761a6e6a22de20f09909de,Semantic Scholar,,, iienlpnut at semeval2020 task 4 guiding plm with prompt template reconstruction strategy for comve,"['Luxi Xing', 'Yuqiang Xie', 'Yue Hu', 'Wei Peng']",https://aclanthology.org/2020.semeval-1.42.pdf,2020-07-02,,"This paper introduces our systems for the first two subtasks of SemEval Task4: Commonsense Validation and Explanation. To clarify the intention for judgment and inject contrastive information for selection, we propose the input reconstruction strategy with prompt templates. Specifically, we formalize the subtasks into the multiple-choice question answering format and construct the input with the prompt templates, then, the final prediction of question answering is considered as the result of subtasks. Experimental results show that our approaches achieve significant performance compared with the baseline systems. Our approaches secure the third rank on both official test sets of the first two subtasks with an accuracy of 96.4 and an accuracy of 94.3 respectively.",94db2ba208a3ab2e469a5a65d6192f4dd04ef0bf,Semantic Scholar,,, autoclip autotuning zeroshot classifiers for visionlanguage models,"['J. H. Metzen', 'Piyapat Saranrittichai', 'Chaithanya Kumar Mummadi']",https://arxiv.org/pdf/2309.16414,2023-09-28,,"Classifiers built upon vision-language models such as CLIP have shown remarkable zero-shot performance across a broad range of image classification tasks. 
Prior work has studied different ways of automatically creating descriptor sets for every class based on prompt templates, ranging from manually engineered templates over templates obtained from a large language model to templates built from random words and characters. Up until now, deriving zero-shot classifiers from the respective encoded class descriptors has remained nearly unchanged, i.e., classify to the class that maximizes cosine similarity between its averaged encoded class descriptors and the image encoding. However, weighing all class descriptors equally can be suboptimal when certain descriptors match visual clues on a given image better than others. In this work, we propose AutoCLIP, a method for auto-tuning zero-shot classifiers. AutoCLIP tunes per-image weights to each prompt template at inference time, based on statistics of class descriptor-image similarities. AutoCLIP is fully unsupervised, has very low computational overhead, and can be easily implemented in few lines of code. We show that AutoCLIP outperforms baselines across a broad range of vision-language models, datasets, and prompt templates consistently and by up to 3 percent point accuracy.",99bd3e04b6b65abf3f03de69654059c3710d03e8,Semantic Scholar,,, trustgpt a benchmark for trustworthy and responsible large language models,"['Yue Huang', 'Qihui Zhang', 'Philip S. Yu', 'Lichao Sun']",http://arxiv.org/pdf/2306.11507,2023-06-20,,"Large Language Models (LLMs) such as ChatGPT, have gained significant attention due to their impressive natural language processing capabilities. It is crucial to prioritize human-centered principles when utilizing these models. Safeguarding the ethical and moral compliance of LLMs is of utmost importance. However, individual ethical issues have not been well studied on the latest LLMs. Therefore, this study aims to address these gaps by introducing a new benchmark -- TrustGPT. TrustGPT provides a comprehensive evaluation of LLMs in three crucial areas: toxicity, bias, and value-alignment. Initially, TrustGPT examines toxicity in language models by employing toxic prompt templates derived from social norms. It then quantifies the extent of bias in models by measuring quantifiable toxicity values across different groups. Lastly, TrustGPT assesses the value of conversation generation models from both active value-alignment and passive value-alignment tasks. Through the implementation of TrustGPT, this research aims to enhance our understanding of the performance of conversation generation models and promote the development of language models that are more ethical and socially responsible.",9d81ec931b85d6c6cf3453126670cd7a30a689e7,Semantic Scholar,,, "promptaid prompt exploration, perturbation, testing and iteration using visual analytics for large language models","['Aditi Mishra', 'Utkarsh Soni', 'Anjana Arunkumar', 'Jinbin Huang', 'Bum Chul Kwon', 'Chris Bryan']",http://arxiv.org/pdf/2304.01964,2023-04-04,,"Large Language Models (LLMs) have gained widespread popularity due to their ability to perform ad-hoc Natural Language Processing (NLP) tasks with a simple natural language prompt. Part of the appeal for LLMs is their approachability to the general public, including individuals with no prior technical experience in NLP techniques. However, natural language prompts can vary significantly in terms of their linguistic structure, context, and other semantics. Modifying one or more of these aspects can result in significant differences in task performance. 
Non-expert users may find it challenging to identify the changes needed to improve a prompt, especially when they lack domain-specific knowledge and lack appropriate feedback. To address this challenge, we present PromptAid, a visual analytics system designed to interactively create, refine, and test prompts through exploration, perturbation, testing, and iteration. PromptAid uses multiple, coordinated visualizations which allow users to improve prompts by using the three strategies: keyword perturbations, paraphrasing perturbations, and obtaining the best set of in-context few-shot examples. PromptAid was designed through an iterative prototyping process involving NLP experts and was evaluated through quantitative and qualitative assessments for LLMs. Our findings indicate that PromptAid helps users to iterate over prompt template alterations with less cognitive overhead, generate diverse prompts with help of recommendations, and analyze the performance of the generated prompts while surpassing existing state-of-the-art prompting interfaces in performance.",a2c8d1c5470435176185bf891c76711a9b44808a,Semantic Scholar,,, winclip zerofewshot anomaly classification and segmentation,"['Jongheon Jeong', 'Yang Zou', 'Taewan Kim', 'Dongqing Zhang', 'Avinash Ravichandran', 'O. Dabeer']",https://arxiv.org/pdf/2303.14814,2023-03-26,,"Visual anomaly classification and segmentation are vital for automating industrial quality inspection. The focus of prior research in the field has been on training custom models for each quality inspection task, which requires task-specific images and annotation. In this paper we move away from this regime, addressing zero-shot and few-normal-shot anomaly classification and segmentation. Recently CLIP, a vision-language model, has shown revolutionary generality with competitive zero-/few-shot performance in comparison to full-supervision. But CLIP falls short on anomaly classification and segmentation tasks. Hence, we propose window-based CLIP (WinCLIP) with (1) a compositional ensemble on state words and prompt templates and (2) efficient extraction and aggregation of window/patch/image-level features aligned with text. We also propose its few-normal-shot extension Win-CLIP+, which uses complementary information from normal images. In MVTec-AD (and VisA), without further tuning, WinCLIP achieves 91.8%/85.1% (78.1%/79.6%) AU-ROC in zero-shot anomaly classification and segmentation while WinCLIP + does 93.1%/95.2% (83.8%/96.4%) in 1-normal-shot, surpassing state-of-the-art by large margins.",aa207668318fec38d60b79f407fb64982e46fce9,Semantic Scholar,,, automatic multilabel prompting simple and interpretable fewshot classification,"['Han Wang', 'Canwen Xu', 'Julian McAuley']",http://arxiv.org/pdf/2204.06305,2022-04-13,,"Prompt-based learning (i.e., prompting) is an emerging paradigm for exploiting knowledge learned by a pretrained language model. In this paper, we propose Automatic Multi-Label Prompting (AMuLaP), a simple yet effective method to automatically select label mappings for few-shot text classification with prompting. Our method exploits one-to-many label mappings and a statistics-based algorithm to select label mappings given a prompt template. Our experiments demonstrate that AMuLaP achieves competitive performance on the GLUE benchmark without human effort or external resources.",b0f915c8e33afdf3829af71f189ddc34077dcc8e,Semantic Scholar,,, modeltuning via prompts makes nlp models adversarially robust,"['Mrigank Raman', 'Pratyush Maini', 'J. Z. 
Kolter', 'Zachary Chase Lipton', 'Danish Pruthi']",http://arxiv.org/pdf/2303.07320,2023-03-13,,"In recent years, NLP practitioners have converged on the following practice: (i) import an off-the-shelf pretrained (masked) language model; (ii) append a multilayer perceptron atop the CLS token's hidden representation (with randomly initialized weights); and (iii) fine-tune the entire model on a downstream task (MLP-FT). This procedure has produced massive gains on standard NLP benchmarks, but these models remain brittle, even to mild adversarial perturbations. In this work, we demonstrate surprising gains in adversarial robustness enjoyed by Model-tuning Via Prompts (MVP), an alternative method of adapting to downstream tasks. Rather than appending an MLP head to make output prediction, MVP appends a prompt template to the input, and makes prediction via text infilling/completion. Across 5 NLP datasets, 4 adversarial attacks, and 3 different models, MVP improves performance against adversarial substitutions by an average of 8% over standard methods and even outperforms adversarial training-based state-of-art defenses by 3.5%. By combining MVP with adversarial training, we achieve further improvements in adversarial robustness while maintaining performance on unperturbed examples. Finally, we conduct ablations to investigate the mechanism underlying these gains. Notably, we find that the main causes of vulnerability of MLP-FT can be attributed to the misalignment between pre-training and fine-tuning tasks, and the randomly initialized MLP parameters.",b6499bcc10d4a70c3ca8b84995270cfd0d29de4c,Semantic Scholar,,, what makes pretrained language models better zeroshot learners,"['Jinghui Lu', 'Rui Zhao', 'Brian Mac Namee', 'Dongsheng Zhu', 'Weidong Han', 'Fei Tan']",https://aclanthology.org/2023.acl-long.128.pdf,2022-09-30,,"Current methods for prompt learning in zero-shot scenarios widely rely on a development set with sufficient human-annotated data to select the best-performing prompt template a posteriori. This is not ideal because in a real-world zero-shot scenario of practical relevance, no labelled data is available. Thus, we propose a simple yet effective method for screening reasonable prompt templates in zero-shot text classification: Perplexity Selection (Perplection). We hypothesize that language discrepancy can be used to measure the efficacy of prompt templates, and thereby develop a substantiated perplexity-based scheme allowing for forecasting the performance of prompt templates in advance. Experiments show that our method leads to improved prediction performance in a realistic zero-shot setting, eliminating the need for any labelled examples.",baf63d7cf115d674a8c8da3a3d789aa84521977a,Semantic Scholar,,, promptner prompt locating and typing for named entity recognition,"['Yongliang Shen', 'Zeqi Tan', 'Shuhui Wu', 'Wenqi Zhang', 'Rongsheng Zhang', 'Yadong Xi', 'Weiming Lu', 'Y. Zhuang']",http://arxiv.org/pdf/2305.17104,2023-05-26,,"Prompt learning is a new paradigm for utilizing pre-trained language models and has achieved great success in many tasks. To adopt prompt learning in the NER task, two kinds of methods have been explored from a pair of symmetric perspectives, populating the template by enumerating spans to predict their entity types or constructing type-specific prompts to locate entities. 
However, these methods not only require a multi-round prompting manner with a high time overhead and computational cost, but also require elaborate prompt templates, that are difficult to apply in practical scenarios. In this paper, we unify entity locating and entity typing into prompt learning, and design a dual-slot multi-prompt template with the position slot and type slot to prompt locating and typing respectively. Multiple prompts can be input to the model simultaneously, and then the model extracts all entities by parallel predictions on the slots. To assign labels for the slots during training, we design a dynamic template filling mechanism that uses the extended bipartite graph matching between prompts and the ground-truth entities. We conduct experiments in various settings, including resource-rich flat and nested NER datasets and low-resource in-domain and cross-domain datasets. Experimental results show that the proposed model achieves a significant performance improvement, especially in the cross-domain few-shot setting, which outperforms the state-of-the-art model by +7.7% on average.",bd2c32285e8ad5b6e322391cca5d475de4f84169,Semantic Scholar,,, clip model is an efficient continual learner,"['Vishal G. Thengane', 'Salman A. Khan', 'Munawar Hayat', 'F. Khan']",http://arxiv.org/pdf/2210.03114,2022-10-06,,"The continual learning setting aims to learn new tasks over time without forgetting the previous ones. The literature reports several significant efforts to tackle this problem with limited or no access to previous task data. Among such efforts, typical solutions offer sophisticated techniques involving memory replay, knowledge distillation, model regularization, and dynamic network expansion. The resulting methods have a retraining cost at each learning task, dedicated memory requirements, and setting-specific design choices. In this work, we show that a frozen CLIP (Contrastive Language-Image Pretraining) model offers astounding continual learning performance without any fine-tuning (zero-shot evaluation). We evaluate CLIP under a variety of settings including class-incremental, domain-incremental and task-agnostic incremental learning on five popular benchmarks (ImageNet-100 & 1K, CORe50, CIFAR-100, and TinyImageNet). Without any bells and whistles, the CLIP model outperforms the state-of-the-art continual learning approaches in majority of the settings. We show the effect on CLIP model’s performance by varying text inputs with simple prompt templates. To the best of our knowledge, this is the first work to report the CLIP zero-shot performance in a continual setting. We advocate the use of this strong yet embarrassingly simple baseline for future comparisons in the continual learning tasks. Code is available at https://github.com/vgthengane/Continual-CLIP.",c1372b08e382030e905d1c8751a7794ee91e9d31,Semantic Scholar,,, distilling taskspecific logical rules from large pretrained models,"['Tao Chen', 'Luxin Liu', 'Xu Jia', 'Baoliang Cui', 'Haihong Tang', 'Siliang Tang']",http://arxiv.org/pdf/2210.02768,2022-10-06,,"Logical rules, both transferable and explainable, are widely used as weakly supervised signals for many downstream tasks such as named entity tagging. To reduce the human effort of writing rules, previous researchers adopt an iterative approach to automatically learn logical rules from several seed rules. However, obtaining more seed rules can only be accomplished by extra human annotation with heavy costs.
Limited by the size and quality of the seed rules, the model performance of previous systems is bounded. In this paper, we develop a novel framework STREAM to distill task-specific logical rules from large pre-trained models. Specifically, we borrow recent prompt-based language models as the knowledge expert to yield initial seed rules, and based on the formed high-quality instance pool that acts as an intermediary role, we keep teaching the expert to fit our task and learning task-specific logical rules. Experiments on three public named entity tagging benchmarks demonstrate the effectiveness of our proposed framework. With several predefined prompt templates, our system has gained significant improvements over previous state-of-the-art methods.",c2903ea606e409d49994c801bb5aab321f623e5c,Semantic Scholar,,, "a study on prompt design, advantages and limitations of chatgpt for deep learning program repair","['Jialun Cao', 'Meiziniu Li', 'Ming Wen', 'S. Cheung']",http://arxiv.org/pdf/2304.08191,2023-04-17,,"ChatGPT has revolutionized many research and industrial fields. ChatGPT has shown great potential in software engineering to boost various traditional tasks such as program repair, code understanding, and code generation. However, whether automatic program repair (APR) applies to deep learning (DL) programs is still unknown. DL programs, whose decision logic is not explicitly encoded in the source code, have posed unique challenges to APR. While to repair DL programs, an APR approach needs to not only parse the source code syntactically but also needs to understand the code intention. With the best prior work, the performance of fault localization is still far less than satisfactory (only about 30\%). Therefore, in this paper, we explore ChatGPT's capability for DL program repair by asking three research questions. (1) Can ChatGPT debug DL programs effectively? (2) How can ChatGPT's repair performance be improved by prompting? (3) In which way can dialogue help facilitate the repair? On top of that, we categorize the common aspects useful for prompt design for DL program repair. Also, we propose various prompt templates to facilitate the performance and summarize the advantages and disadvantages of ChatGPT's abilities such as detecting bad code smell, code refactoring, and detecting API misuse/deprecation.",c6808575096a6e4f3cbdc5f893384bc5a01cc6f8,Semantic Scholar,,, don't stop pretraining make promptbased finetuning powerful learner,"['Zhengxiang Shi', 'Aldo Lipani']",https://arxiv.org/pdf/2305.01711,2023-05-02,,"Language models (LMs) trained on vast quantities of unlabelled data have greatly advanced the field of natural language processing (NLP). In this study, we re-visit the widely accepted notion in NLP that continued pre-training LMs on task-related texts improves the performance of fine-tuning (FT) in downstream tasks. Through experiments on eight single-sentence tasks and eight sentence-pair tasks in both semi-supervised and fully-supervised settings, we find that conventional continued pre-training does not consistently provide benefits and can even be detrimental for sentence-pair tasks or when prompt-based FT is used. To tackle these issues, we propose Prompt-based Continued Pre-training (PCP), which combines the idea of instruction tuning with conventional continued pre-training. 
Our approach aims to improve the performance of prompt-based FT by presenting both task-related texts and prompt templates to LMs through unsupervised pre-training objectives before fine-tuning for the target task. Our empirical evaluations on 21 benchmarks demonstrate that the PCP consistently improves the performance of state-of-the-art prompt-based FT approaches (up to 20.1% absolute) in both semi-supervised and fully-supervised settings, even with only hundreds of unlabelled examples. Additionally, prompt-based FT with the PCP outperforms state-of-the-art semi-supervised approaches with greater simplicity, eliminating the need for an iterative process and extra data augmentation. Our further analysis explores the performance lower bound of the PCP and reveals that the advantages of PCP persist across different sizes of models and datasets.",c79852e9c9cc6734c9150847deb5449e489354ea,Semantic Scholar,,, labelprompt effective promptbased learning for relation classification,"['W. Zhang', 'Xiaoning Song', 'Zhenhua Feng', 'Tianyang Xu', 'Xiaojun Wu']",https://arxiv.org/pdf/2302.08068,2023-02-16,,"Recently, prompt-based learning has gained popularity across many natural language processing (NLP) tasks by reformulating them into a cloze-style format to better align pre-trained language models (PLMs) with downstream tasks. However, applying this approach to relation classification poses unique challenges. Specifically, associating natural language words that fill the masked token with semantic relation labels (\textit{e.g.} \textit{``org:founded\_by}'') is difficult. To address this challenge, this paper presents a novel prompt-based learning method, namely LabelPrompt, for the relation classification task. Motivated by the intuition to ``GIVE MODEL CHOICES!'', we first define additional tokens to represent relation labels, which regard these tokens as the verbaliser with semantic initialisation and explicitly construct them with a prompt template method. Then, to mitigate inconsistency between predicted relations and given entities, we implement an entity-aware module with contrastive learning. Last, we conduct an attention query strategy within the self-attention layer to differentiates prompt tokens and sequence tokens. Together, these strategies enhance the adaptability of prompt-based learning, especially when only small labelled datasets is available. Comprehensive experiments on benchmark datasets demonstrate the superiority of our method, particularly in the few-shot scenario.",cb3379177c6e119dca0d32d41fa0c9b9fce172c8,Semantic Scholar,,, "reason for future, act for now a principled framework for autonomous llm agents with provable sample efficiency","['Zhihan Liu', 'Hao Hu', 'Shenao Zhang', 'Hongyi Guo', 'Shuqi Ke', 'Boyi Liu', 'Zhaoran Wang']",https://arxiv.org/pdf/2309.17382,2023-09-29,,"Large language models (LLMs) demonstrate impressive reasoning abilities, but translating reasoning into actions in the real world remains challenging. In particular, it remains unclear how to complete a given task provably within a minimum number of interactions with the external environment, e.g., through an internal mechanism of reasoning. To this end, we propose a principled framework with provable regret guarantees to orchestrate reasoning and acting, which we call""reason for future, act for now""(\texttt{RAFA}). Specifically, we design a prompt template for reasoning that learns from the memory buffer and plans a future trajectory over a long horizon (""reason for future""). 
At each step, the LLM agent takes the initial action of the planned trajectory (""act for now""), stores the collected feedback in the memory buffer, and reinvokes the reasoning routine to replan the future trajectory from the new state. The key idea is to cast reasoning in LLMs as learning and planning in Bayesian adaptive Markov decision processes (MDPs). Correspondingly, we prompt LLMs to form an updated posterior of the unknown environment from the memory buffer (learning) and generate an optimal trajectory for multiple future steps that maximizes a value function (planning). The learning and planning subroutines are performed in an""in-context""manner to emulate the actor-critic update for MDPs. Our theoretical analysis proves that the novel combination of long-term reasoning and short-term acting achieves a $\sqrt{T}$ regret. In particular, the regret bound highlights an intriguing interplay between the prior knowledge obtained through pretraining and the uncertainty reduction achieved by reasoning and acting. Our empirical validation shows that it outperforms various existing frameworks and achieves nearly perfect scores on a few benchmarks.",d3ca116177369bf6fbe27de64506a2f401aca996,Semantic Scholar,,, an informationtheoretic approach to prompt engineering without ground truth labels,"['Lisa P. Argyle', 'E. Busby', 'Nancy Fulda', 'Joshua R Gubler', 'Christopher Rytting', 'Taylor Sorensen', 'D. Wingate']",https://www.cambridge.org/core/services/aop-cambridge-core/content/view/035D7C8A55B237942FB6DBAD7CAA4E49/S1047198723000025a.pdf/div-class-title-out-of-one-many-using-language-models-to-simulate-human-samples-div.pdf,2022-03-21,,"Pre-trained language models derive substantial linguistic and factual knowledge from the massive corpora on which they are trained, and prompt engineering seeks to align these models to specific tasks. Unfortunately, existing prompt engineering methods require significant amounts of labeled data, access to model parameters, or both. We introduce a new method for selecting prompt templates without labeled examples and without direct access to the model. Specifically, over a set of candidate templates, we choose the template that maximizes the mutual information between the input and the corresponding model output. Across 8 datasets representing 7 distinct NLP tasks, we show that when a template has high mutual information, it also has high accuracy on the task. On the largest model, selecting prompts with our method gets 90% of the way from the average prompt accuracy to the best prompt accuracy and requires no ground truth labels.",d53e70d834243d3d8d4b621c0c52dfec26081155,Semantic Scholar,,, prompting large language models with the socratic method,['Edward Y. Chang'],https://arxiv.org/pdf/2303.08769,2023-02-17,,"This paper presents a systematic approach to using the Socratic method in developing prompt templates that effectively interact with large language models, including GPT-3. Various methods are examined, and those that yield precise answers and justifications while fostering creativity and imagination to enhance creative writing are identified. Techniques such as definition, elenchus, dialectic, maieutics, generalization, and counterfactual reasoning are discussed for their application in engineering prompt templates and their connections to inductive, deductive, and abductive reasoning. Through examples, the effectiveness of these dialogue and reasoning methods is demonstrated. 
An interesting observation is made that when the task's goal and user intent are conveyed to GPT-3 via ChatGPT before the start of a dialogue, the large language model seems to connect to the external context expressed in the intent and perform more effectively.",d7386e8859b22e05ce9c4a972613d4b1e1e44198,Semantic Scholar,,, anovl adapting visionlanguage models for unified zeroshot anomaly localization,"['Hanqiu Deng', 'Zhaoxiang Zhang', 'Jinan Bao', 'Xingyu Li']",https://arxiv.org/pdf/2308.15939,2023-08-30,,"Contrastive Language-Image Pre-training (CLIP) models have shown promising performance on zero-shot visual recognition tasks by learning visual representations under natural language supervision. Recent studies attempt the use of CLIP to tackle zero-shot anomaly detection by matching images with normal and abnormal state prompts. However, since CLIP focuses on building correspondence between paired text prompts and global image-level representations, the lack of patch-level vision to text alignment limits its capability on precise visual anomaly localization. In this work, we introduce a training-free adaptation (TFA) framework of CLIP for zero-shot anomaly localization. In the visual encoder, we innovate a training-free value-wise attention mechanism to extract intrinsic local tokens of CLIP for patch-level local description. From the perspective of text supervision, we particularly design a unified domain-aware contrastive state prompting template. On top of the proposed TFA, we further introduce a test-time adaptation (TTA) mechanism to refine anomaly localization results, where a layer of trainable parameters in the adapter is optimized using TFA's pseudo-labels and synthetic noise-corrupted tokens. With both TFA and TTA adaptation, we significantly exploit the potential of CLIP for zero-shot anomaly localization and demonstrate the effectiveness of our proposed methods on various datasets.",daa34ae46c82e6980ac1daaf2dd9716ef3718f21,Semantic Scholar,,, continuous prompt tuning based textual entailment model for ecommerce entity typing,"['Yibo Wang', 'Congying Xia', 'Guan Wang', 'Philip S. Yu']",https://arxiv.org/pdf/2211.02483,2022-11-04,,"The explosion of e-commerce has caused the need for processing and analysis of product titles, like entity typing in product titles. However, the rapid activity in e-commerce has led to the rapid emergence of new entities, which is difficult for general entity typing. Besides, product titles in e-commerce have very different language styles from text data in general domain. In order to handle new entities in product titles and address the special language styles of product titles in e-commerce domain, we propose our textual entailment model with continuous prompt tuning based hypotheses and fusion embeddings for e-commerce entity typing. First, we reformulate entity typing into a textual entailment problem to handle new entities that are not present during training. Second, we design a model to automatically generate textual entailment hypotheses using a continuous prompt tuning method, which can generate better textual entailment hypotheses without manual design. Third, we utilize the fusion embeddings of BERT embedding and Char-acterBERT embedding to solve the problem that the language styles of product titles in e-commerce are different from that of general domain. 
To analyze the effect of each contribution, we compare the performance of entity typing and textual entailment model, and conduct ablation studies on continuous prompt tuning and fusion embeddings. We also evaluate the impact of different prompt template initialization for the continuous prompt tuning. We show our proposed model improves the average F1 score by around 2% compared to the baseline BERT entity typing model.",dd568e6838903ad7c381f13c1268c94c5db08b02,Semantic Scholar,,, daprompt deterministic assumption prompt learning for event causality identification,"['Wei Xiang', 'Chuanhong Zhan', 'Bang Wang']",https://arxiv.org/pdf/2307.09813,2023-07-19,,"Event Causality Identification (ECI) aims at determining whether there is a causal relation between two event mentions. Conventional prompt learning designs a prompt template to first predict an answer word and then maps it to the final decision. Unlike conventional prompts, we argue that predicting an answer word may not be a necessary prerequisite for the ECI task. Instead, we can first make a deterministic assumption on the existence of causal relation between two events and then evaluate its rationality to either accept or reject the assumption. The design motivation is to try the most utilization of the encyclopedia-like knowledge embedded in a pre-trained language model. In light of such considerations, we propose a deterministic assumption prompt learning model, called DAPrompt, for the ECI task. In particular, we design a simple deterministic assumption template concatenating with the input event pair, which includes two masks as predicted events' tokens. We use the probabilities of predicted events to evaluate the assumption rationality for the final event causality decision. Experiments on the EventStoryLine corpus and Causal-TimeBank corpus validate our design objective in terms of significant performance improvements over the state-of-the-art algorithms.",e92f4ff44def2273d9fcb02921b257dcbe3c9626,Semantic Scholar,,, clickprompt ctr models are strong prompt generators for adapting language models to ctr prediction,"['Jianghao Lin', 'Bo Chen', 'Hangyu Wang', 'Yunjia Xi', 'Yanru Qu', 'Xinyi Dai', 'Kangning Zhang', 'Ruiming Tang', 'Yong Yu', 'Weinan Zhang']",https://arxiv.org/pdf/2310.09234,2023-10-13,,"Click-through rate (CTR) prediction has become increasingly indispensable for various Internet applications. Traditional CTR models convert the multi-field categorical data into ID features via one-hot encoding, and extract the collaborative signals among features. Such a paradigm suffers from the problem of semantic information loss. Another line of research explores the potential of pretrained language models (PLMs) for CTR prediction by converting input data into textual sentences through hard prompt templates. Although semantic signals are preserved, they generally fail to capture the collaborative information (e.g., feature interactions, pure ID features), not to mention the unacceptable inference overhead brought by the huge model size. In this paper, we aim to model both the semantic knowledge and collaborative knowledge for accurate CTR estimation, and meanwhile address the inference inefficiency issue. To benefit from both worlds and close their gaps, we propose a novel model-agnostic framework (i.e., ClickPrompt), where we incorporate CTR models to generate interaction-aware soft prompts for PLMs. 
We design a prompt-augmented masked language modeling (PA-MLM) pretraining task, where PLM has to recover the masked tokens based on the language context, as well as the soft prompts generated by CTR model. The collaborative and semantic knowledge from ID and textual features would be explicitly aligned and interacted via the prompt interface. Then, we can either tune the CTR model with PLM for superior performance, or solely tune the CTR model without PLM for inference efficiency. Experiments on four real-world datasets validate the effectiveness of ClickPrompt compared with existing baselines.",e96be7c55d139965b15bc0527d6d528b225f9a61,Semantic Scholar,,, large language models are zeroshot rankers for recommender systems,"['Yupeng Hou', 'Junjie Zhang', 'Zihan Lin', 'Hongyu Lu', 'Ruobing Xie', 'Julian McAuley', 'Wayne Xin Zhao']",http://arxiv.org/pdf/2305.08845,2023-05-15,,"Recently, large language models (LLMs) (e.g., GPT-4) have demonstrated impressive general-purpose task-solving abilities, including the potential to approach recommendation tasks. Along this line of research, this work aims to investigate the capacity of LLMs that act as the ranking model for recommender systems. We first formalize the recommendation problem as a conditional ranking task, considering sequential interaction histories as conditions and the items retrieved by other candidate generation models as candidates. To solve the ranking task by LLMs, we carefully design the prompting template and conduct extensive experiments on two widely-used datasets. We show that LLMs have promising zero-shot ranking abilities but (1) struggle to perceive the order of historical interactions, and (2) can be biased by popularity or item positions in the prompts. We demonstrate that these issues can be alleviated using specially designed prompting and bootstrapping strategies. Equipped with these insights, zero-shot LLMs can even challenge conventional recommendation models when ranking candidates are retrieved by multiple candidate generators. The code and processed datasets are available at https://github.com/RUCAIBox/LLMRank.",f4e723958a93762befb4d4a039b44a7d752f9917,Semantic Scholar,,, tiam a metric for evaluating alignment in texttoimage generation,"['P. Grimal', 'H. Borgne', 'Olivier Ferret', 'Julien Tourille']",https://arxiv.org/pdf/2307.05134,2023-07-11,,"The progress in the generation of synthetic images has made it crucial to assess their quality. While several metrics have been proposed to assess the rendering of images, it is crucial for Text-to-Image (T2I) models, which generate images based on a prompt, to consider additional aspects such as to which extent the generated image matches the important content of the prompt. Moreover, although the generated images usually result from a random starting point, the influence of this one is generally not considered. In this article, we propose a new metric based on prompt templates to study the alignment between the content specified in the prompt and the corresponding generated images. It allows us to better characterize the alignment in terms of the type of the specified objects, their number, and their color. We conducted a study on several recent T2I models about various aspects. An additional interesting result we obtained with our approach is that image quality can vary drastically depending on the noise used as a seed for the images. We also quantify the influence of the number of concepts in the prompt, their order as well as their (color) attributes. 
Finally, our method allows us to identify some seeds that produce better images than others, opening novel directions of research on this understudied topic.",f7d57f223154965e6e5584d3a51561aaea7ca13b,Semantic Scholar,,, the limits of chatgpt in extracting aspectcategoryopinionsentiment quadruples a comparative analysis,"['Xiancai Xu', 'Jia-Dong Zhang', 'Rongchang Xiao', 'Lei Xiong']",https://arxiv.org/pdf/2310.06502,2023-10-10,,"Recently, ChatGPT has attracted great attention from both industry and academia due to its surprising abilities in natural language understanding and generation. We are particularly curious about whether it can achieve promising performance on one of the most complex tasks in aspect-based sentiment analysis, i.e., extracting aspect-category-opinion-sentiment quadruples from texts. To this end, in this paper we develop a specialized prompt template that enables ChatGPT to effectively tackle this complex quadruple extraction task. Further, we propose a selection method on few-shot examples to fully exploit the in-context learning ability of ChatGPT and uplift its effectiveness on this complex task. Finally, we provide a comparative evaluation on ChatGPT against existing state-of-the-art quadruple extraction models based on four public datasets and highlight some important findings regarding the capability boundaries of ChatGPT in the quadruple extraction.",f84d6d6d58b836a64c4a96b062bfff769d08a595,Semantic Scholar,,, let me check the examples enhancing demonstration learning via explicit imitation,"['Sirui Wang', 'Kaiwen Wei', 'Hongzhi Zhang', 'Yun Li', 'Wei Wu']",http://arxiv.org/pdf/2209.00455,2022-08-31,,"Demonstration learning aims to guide the prompt prediction by providing answered demonstrations in the few shot settings. Despite achieving promising results, existing work only concatenates the answered examples as demonstrations to the prompt template (including the raw context) without any additional operation, neglecting the prompt-demonstration dependencies. Besides, prior research found that randomly replacing the labels of demonstrations marginally hurts performance, illustrating that the model could not properly learn the knowledge brought by the demonstrations. Inspired by the human learning process, in this paper, we introduce Imitation DEMOnstration learning (Imitation-Demo) to strengthen demonstration learning via explicitly imitating human review behaviour, which includes: (1) contrastive learning mechanism to concentrate on similar demonstrations.(2) demonstration-label re-prediction method to consolidate known knowledge. Experiment results show that our proposed method achieves state-of-the-art performance on 5 out of 14 classification corpus. Further studies also prove that Imitation-Demo strengthens the associations between the prompt and demonstrations, which could provide the basis for exploring how demonstration learning works.",fdbdcc3a65dfd6f258c533fd12d58bbfcab15bc3,Semantic Scholar,,, promptbased length controlled generation with reinforcement learning,"['Renlong Jie', 'Xiaojun Meng', 'Lifeng Shang', 'Xin Jiang', 'Qun Liu']",https://arxiv.org/pdf/2308.12030,2023-08-23,,"Large language models (LLMs) like ChatGPT and GPT-4 have attracted great attention given their surprising performance on a wide range of NLP tasks. Length controlled generation of LLMs emerges as an important topic, which enables users to fully leverage the capability of LLMs in more real-world scenarios like generating a proper answer or essay of a desired length. 
In addition, the autoregressive generation in LLMs is extremely time-consuming, while the ability of controlling this generated length can reduce the inference cost by limiting the length. Therefore, we propose a prompt-based length control method to achieve high-accuracy length controlled generation. In particular, we adopt reinforcement learning with the reward signal given by either trainable or rule-based reward models, which further enhances the length-control ability of LLMs by rewarding outputs that follow pre-defined control instructions. To enable rule-based inference, we also introduce a standard prompt extractor to collect the standard control information from users' input. Experiments show that our method significantly improves the accuracy of prompt-based length control for the summarization task on popular datasets like CNNDM and NYT. Both the standard prompt extractor and the RL-tuned model have shown strong generalization ability to unseen control prompt templates.",fe583403c95c3e9b4148d6276f04bda5ace33660,Semantic Scholar,,, llm4dv using large language models for hardware test stimuli generation,"['Zixi Zhang', 'Greg Chadwick', 'Hugo McNally', 'Yiren Zhao', 'Robert Mullins']",https://arxiv.org/pdf/2310.04535,2023-10-06,,"Test stimuli generation has been a crucial but labor-intensive task in hardware design verification. In this paper, we revolutionize this process by harnessing the power of large language models (LLMs) and present a novel benchmarking framework, LLM4DV. This framework introduces a prompt template for interactively eliciting test stimuli from the LLM, along with four innovative prompting improvements to support the pipeline execution and further enhance its performance. We compare LLM4DV to traditional constrained-random testing (CRT), using three self-designed design-under-test (DUT) modules. Experiments demonstrate that LLM4DV excels in efficiently handling straightforward DUT scenarios, leveraging its ability to employ basic mathematical reasoning and pre-trained knowledge. While it exhibits reduced efficiency in complex task settings, it still outperforms CRT in relative terms. The proposed framework and the DUT modules used in our experiments will be open-sourced upon publication.",ff7f75989d125a3356fdb5ad76f504037cc27d5c,Semantic Scholar,,, scalable and transferable blackbox jailbreaks for language models via persona modulation,"['Rusheb Shah', 'Quentin Feuillade--Montixi', 'Soroush Pour', 'Arush Tagade', 'Stephen Casper', 'Javier Rando']",http://arxiv.org/pdf/2311.03348v2.pdf,2023-11-06,," Despite efforts to align large language models to produce harmless responses, they are still vulnerable to jailbreak prompts that elicit unrestricted behaviour. In this work, we investigate persona modulation as a black-box jailbreaking method to steer a target model to take on personalities that are willing to comply with harmful instructions. Rather than manually crafting prompts for each persona, we automate the generation of jailbreaks using a language model assistant. We demonstrate a range of harmful completions made possible by persona modulation, including detailed instructions for synthesising methamphetamine, building a bomb, and laundering money. These automated attacks achieve a harmful completion rate of 42.5% in GPT-4, which is 185 times larger than before modulation (0.23%). These prompts also transfer to Claude 2 and Vicuna with harmful completion rates of 61.0% and 35.9%, respectively.
Our work reveals yet another vulnerability in commercial large language models and highlights the need for more comprehensive safeguards.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, masterkey automated jailbreak across multiple large language model chatbots,"['Gelei Deng', 'Yi Liu', 'Yuekang Li', 'Kailong Wang', 'Ying Zhang', 'Zefeng Li', 'Haoyu Wang', 'Tianwei Zhang', 'Yang Liu']",http://arxiv.org/pdf/2307.08715v2.pdf,2023-07-16,," Large Language Models (LLMs) have revolutionized Artificial Intelligence (AI) services due to their exceptional proficiency in understanding and generating human-like text. LLM chatbots, in particular, have seen widespread adoption, transforming human-machine interactions. However, these LLM chatbots are susceptible to ""jailbreak"" attacks, where malicious users manipulate prompts to elicit inappropriate or sensitive responses, contravening service policies. Despite existing attempts to mitigate such threats, our research reveals a substantial gap in our understanding of these vulnerabilities, largely due to the undisclosed defensive measures implemented by LLM service providers. In this paper, we present Jailbreaker, a comprehensive framework that offers an in-depth understanding of jailbreak attacks and countermeasures. Our work makes a dual contribution. First, we propose an innovative methodology inspired by time-based SQL injection techniques to reverse-engineer the defensive strategies of prominent LLM chatbots, such as ChatGPT, Bard, and Bing Chat. This time-sensitive approach uncovers intricate details about these services' defenses, facilitating a proof-of-concept attack that successfully bypasses their mechanisms. Second, we introduce an automatic generation method for jailbreak prompts. Leveraging a fine-tuned LLM, we validate the potential of automated jailbreak generation across various commercial LLM chatbots. Our method achieves a promising average success rate of 21.58%, significantly outperforming the effectiveness of existing techniques. We have responsibly disclosed our findings to the concerned service providers, underscoring the urgent need for more robust defenses. Jailbreaker thus marks a significant step towards understanding and mitigating jailbreak threats in the realm of LLM chatbots.",,arXiv,['cs.cr'],, probing llms for hate speech detection strengths and vulnerabilities,"['Sarthak Roy', 'Ashish Harshavardhan', 'Animesh Mukherjee', 'Punyajoy Saha']",http://arxiv.org/pdf/2310.12860v2.pdf,2023-10-19,," Recently efforts have been made by social media platforms as well as researchers to detect hateful or toxic language using large language models. However, none of these works aim to use explanation, additional context and victim community information in the detection process. We utilise different prompt variation, input information and evaluate large language models in zero shot setting (without adding any in-context examples). We select three large language models (GPT-3.5, text-davinci and Flan-T5) and three datasets - HateXplain, implicit hate and ToxicSpans. We find that on average including the target information in the pipeline improves the model performance substantially (~20-30%) over the baseline across the datasets. There is also a considerable effect of adding the rationales/explanations into the pipeline (~10-20%) over the baseline across the datasets. In addition, we further provide a typology of the error cases where these large language models fail to (i) classify and (ii) explain the reason for the decisions they take.
Such vulnerable points automatically constitute 'jailbreak' prompts for these models and industry scale safeguard techniques need to be developed to make the models robust against such prompts.",,arXiv,"['cs.cl', 'cs.cy']",, dcc help generating contextaware compiler error explanations with large language models,"['Andrew Taylor', 'Alexandra Vassar', 'Jake Renzella', 'Hammond Pearce']",http://arxiv.org/pdf/2308.11873v2.pdf,2023-08-23,," In the challenging field of introductory programming, high enrollments and failure rates drive us to explore tools and systems to enhance student outcomes, especially automated tools that scale to large cohorts. This paper presents and evaluates the dcc --help tool, an integration of a Large Language Model (LLM) into the Debugging C Compiler (DCC) to generate unique, novice-focused explanations tailored to each error. dcc --help prompts an LLM with contextual information of compile- and run-time error occurrences, including the source code, error location and standard compiler error message. The LLM is instructed to generate novice-focused, actionable error explanations and guidance, designed to help students understand and resolve problems without providing solutions. dcc --help was deployed to our CS1 and CS2 courses, with 2,565 students using the tool over 64,000 times in ten weeks. We analysed a subset of these error/explanation pairs to evaluate their properties, including conceptual correctness, relevancy, and overall quality. We found that the LLM-generated explanations were conceptually accurate in 90% of compile-time and 75% of run-time cases, but often disregarded the instruction not to provide solutions in code. Our findings, observations and reflections following deployment indicate that dcc-help provides novel opportunities for scaffolding students' introduction to programming.",,arXiv,"['cs.se', 'cs.lg', 'cs.pl']",, clarifygpt empowering llmbased code generation with intention clarification,"['Fangwen Mu', 'Lin Shi', 'Song Wang', 'Zhuohao Yu', 'Binquan Zhang', 'Chenxue Wang', 'Shichao Liu', 'Qing Wang']",http://arxiv.org/pdf/2310.10996v1.pdf,2023-10-17,," We introduce a novel framework named ClarifyGPT, which aims to enhance code generation by empowering LLMs with the ability to identify ambiguous requirements and ask targeted clarifying questions. In particular, ClarifyGPT first detects whether a given requirement is ambiguous by performing a code consistency check. If it is ambiguous, ClarifyGPT prompts an LLM to generate targeted clarifying questions. After receiving question responses, ClarifyGPT refines the ambiguous requirement and inputs it into the same LLM to generate a final code solution. To evaluate our ClarifyGPT, we first conduct a human evaluation involving ten participants who use ClarifyGPT for code generation on two publicly available benchmarks: MBPP-sanitized and MBPP-ET. The results show that ClarifyGPT elevates the performance (Pass@1) of GPT-4 from 70.96% to 80.80% on MBPP-sanitized. Furthermore, to perform large-scale automated evaluations of ClarifyGPT across different LLMs and benchmarks without requiring user participation, we introduce a high-fidelity simulation method to simulate user responses. The automated evaluation results also demonstrate that ClarifyGPT can significantly enhance code generation performance compared to the baselines. In particular, ClarifyGPT improves the average performance of GPT-4 and ChatGPT across four benchmarks from 68.02% to 75.75% and from 58.55% to 67.22%, respectively.
We believe that ClarifyGPT can effectively facilitate the practical application of LLMs in real-world development environments.",,arXiv,['cs.se'],, harnessing explanations llmtolm interpreter for enhanced textattributed graph representation learning,"['Xiaoxin He', 'Xavier Bresson', 'Thomas Laurent', 'Adam Perold', 'Yann LeCun', 'Bryan Hooi']",http://arxiv.org/pdf/2305.19523v3.pdf,2023-05-31,," Representation learning on text-attributed graphs (TAGs) has become a critical research problem in recent years. A typical example of a TAG is a paper citation graph, where the text of each paper serves as node attributes. Initial graph neural network (GNN) pipelines handled these text attributes by transforming them into shallow or hand-crafted features, such as skip-gram or bag-of-words features. Recent efforts have focused on enhancing these pipelines with language models (LMs), which typically demand intricate designs and substantial computational resources. With the advent of powerful large language models (LLMs) such as GPT or Llama2, which demonstrate an ability to reason and to utilize general knowledge, there is a growing need for techniques which combine the textual modelling abilities of LLMs with the structural learning capabilities of GNNs. Hence, in this work, we focus on leveraging LLMs to capture textual information as features, which can be used to boost GNN performance on downstream tasks. A key innovation is our use of explanations as features: we prompt an LLM to perform zero-shot classification, request textual explanations for its decision-making process, and design an LLM-to-LM interpreter to translate these explanations into informative features that enhance downstream GNNs. Our experiments demonstrate that our method achieves state-of-the-art results on well-established TAG datasets, including Cora, PubMed, ogbn-arxiv, as well as our newly introduced dataset, arXiv-2023. Furthermore, our method significantly speeds up training, achieving a 2.88 times improvement over the closest baseline on ogbn-arxiv. Lastly, we believe the versatility of the proposed method extends beyond TAGs and holds the potential to enhance other tasks involving graph-text data~\footnote{Our codes and datasets are available at: \url{https://github.com/XiaoxinHe/TAPE}}.",,arXiv,['cs.lg'],, the unreliability of explanations in fewshot prompting for textual reasoning,"['Xi Ye', 'Greg Durrett']",http://arxiv.org/pdf/2205.03401v2.pdf,2022-05-06,," Does prompting a large language model (LLM) like GPT-3 with explanations improve in-context learning? We study this question on two NLP tasks that involve reasoning over text, namely question answering and natural language inference. We test the performance of four LLMs on three textual reasoning datasets using prompts that include explanations in multiple different styles. For these tasks, we find that including explanations in the prompts for OPT, GPT-3 (davinci), and InstructGPT (text-davinci-001) only yields small to moderate accuracy improvements over standard few-shot learning. However, text-davinci-002 is able to benefit more substantially. We further show that explanations generated by the LLMs may not entail the models' predictions nor be factually grounded in the input, even on simple tasks with extractive explanations. However, these flawed explanations can still be useful as a way to verify LLMs' predictions post-hoc.
Through analysis in our three settings, we show that explanations judged by humans to be good--logically consistent with the input and the prediction--more likely cooccur with accurate predictions. Following these observations, we train calibrators using automatically extracted scores that assess the reliability of explanations, allowing us to improve performance post-hoc across all of our datasets.",,arXiv,['cs.cl'],, prompt injection attacks and defenses in llmintegrated applications,"['Yupei Liu', 'Yuqi Jia', 'Runpeng Geng', 'Jinyuan Jia', 'Neil Zhenqiang Gong']",http://arxiv.org/pdf/2310.12815v1.pdf,2023-10-19,," Large Language Models (LLMs) are increasingly deployed as the backend for a variety of real-world applications called LLM-Integrated Applications. Multiple recent works showed that LLM-Integrated Applications are vulnerable to prompt injection attacks, in which an attacker injects malicious instruction/data into the input of those applications such that they produce results as the attacker desires. However, existing works are limited to case studies. As a result, the literature lacks a systematic understanding of prompt injection attacks and their defenses. We aim to bridge the gap in this work. In particular, we propose a general framework to formalize prompt injection attacks. Existing attacks, which are discussed in research papers and blog posts, are special cases in our framework. Our framework enables us to design a new attack by combining existing attacks. Moreover, we also propose a framework to systematize defenses against prompt injection attacks. Using our frameworks, we conduct a systematic evaluation on prompt injection attacks and their defenses with 10 LLMs and 7 tasks. We hope our frameworks can inspire future research in this field. Our code is available at https://github.com/liu00222/Open-Prompt-Injection.",,arXiv,"['cs.cr', 'cs.ai', 'cs.cl', 'cs.lg']",, tensor trust interpretable prompt injection attacks from an online game,"['Sam Toyer', 'Olivia Watkins', 'Ethan Adrian Mendes', 'Justin Svegliato', 'Luke Bailey', 'Tiffany Wang', 'Isaac Ong', 'Karim Elmaaroufi', 'Pieter Abbeel', 'Trevor Darrell', 'Alan Ritter', 'Stuart Russell']",http://arxiv.org/pdf/2311.01011v1.pdf,2023-11-02,," While Large Language Models (LLMs) are increasingly being used in real-world applications, they remain vulnerable to prompt injection attacks: malicious third party prompts that subvert the intent of the system designer. To help researchers study this problem, we present a dataset of over 126,000 prompt injection attacks and 46,000 prompt-based ""defenses"" against prompt injection, all created by players of an online game called Tensor Trust. To the best of our knowledge, this is currently the largest dataset of human-generated adversarial examples for instruction-following LLMs. The attacks in our dataset have a lot of easily interpretable structure, and shed light on the weaknesses of LLMs. We also use the dataset to create a benchmark for resistance to two types of prompt injection, which we refer to as prompt extraction and prompt hijacking. Our benchmark results show that many models are vulnerable to the attack strategies in the Tensor Trust dataset. Furthermore, we show that some attack strategies from the dataset generalize to deployed LLM-based applications, even though they have a very different set of constraints to the game.
We release all data and source code at https://tensortrust.ai/paper",,arXiv,"['cs.lg', 'cs.cr']",, evaluating the instructionfollowing robustness of large language models to prompt injection,"['Zekun Li', 'Baolin Peng', 'Pengcheng He', 'Xifeng Yan']",http://arxiv.org/pdf/2308.10819v3.pdf,2023-08-17,," Large Language Models (LLMs) have demonstrated exceptional proficiency ininstruction-following, becoming increasingly crucial across variousapplications. However, this capability brings with it the risk of promptinjection attacks, where attackers inject instructions into LLMs' input toelicit undesirable actions or content. Understanding the robustness of LLMsagainst such attacks is vital for their safe implementation. In this work, weestablish a benchmark to evaluate the robustness of instruction-following LLMsagainst prompt injection attacks. Our objective is to determine the extent towhich LLMs can be influenced by injected instructions and their ability todifferentiate between these injected and original target instructions. Throughextensive experiments with leading instruction-following LLMs, we uncoversignificant vulnerabilities in their robustness to such attacks. Our resultsindicate that some models are overly tuned to follow any embedded instructionsin the prompt, overly focusing on the latter parts of the prompt without fullygrasping the entire context. By contrast, models with a better grasp of thecontext and instruction-following capabilities will potentially be moresusceptible to compromise by injected instructions. This underscores the needto shift the focus from merely enhancing LLMs' instruction-followingcapabilities to improving their overall comprehension of prompts anddiscernment of instructions that are appropriate to follow. We hope ourin-depth analysis offers insights into the underlying causes of thesevulnerabilities, aiding in the development of future solutions. Code and dataare available athttps://github.com/Leezekun/instruction-following-robustness-eval",,arXiv,"['cs.cl', 'cs.ai']",, backdooring instructiontuned large language models with virtual prompt injection,"['Jun Yan', 'Vikas Yadav', 'Shiyang Li', 'Lichang Chen', 'Zheng Tang', 'Hai Wang', 'Vijay Srinivasan', 'Xiang Ren', 'Hongxia Jin']",http://arxiv.org/pdf/2307.16888v2.pdf,2023-07-31,," Instruction-tuned Large Language Models (LLMs) have demonstrated remarkableabilities to modulate their responses based on human instructions. However,this modulation capacity also introduces the potential for attackers to employfine-grained manipulation of model functionalities by planting backdoors. Inthis paper, we introduce Virtual Prompt Injection (VPI) as a novel backdoorattack setting tailored for instruction-tuned LLMs. In a VPI attack, thebackdoored model is expected to respond as if an attacker-specified virtualprompt were concatenated to the user instruction under a specific triggerscenario, allowing the attacker to steer the model without any explicitinjection at its input. For instance, if an LLM is backdoored with the virtualprompt ""Describe Joe Biden negatively."" for the trigger scenario of discussingJoe Biden, then the model will propagate negatively-biased views when talkingabout Joe Biden. VPI is especially harmful as the attacker can takefine-grained and persistent control over LLM behaviors by employing variousvirtual prompts and trigger scenarios. 
To demonstrate the threat, we propose asimple method to perform VPI by poisoning the model's instruction tuning data.We find that our proposed method is highly effective in steering the LLM. Forexample, by poisoning only 52 instruction tuning examples (0.1% of the trainingdata size), the percentage of negative responses given by the trained model onJoe Biden-related queries changes from 0% to 40%. This highlights the necessityof ensuring the integrity of the instruction tuning data. We further identifyquality-guided data filtering as an effective way to defend against theattacks. Our project page is available at https://poison-llm.github.io.",,arXiv,"['cs.cl', 'cs.cr', 'cs.lg']",, knowledge prompts injecting world knowledge into language models through soft prompts,"['Cicero Nogueira dos Santos', 'Zhe Dong', 'Daniel Cer', 'John Nham', 'Siamak Shakeri', 'Jianmo Ni', 'Yun-hsuan Sung']",http://arxiv.org/pdf/2210.04726v1.pdf,2022-10-10,," Soft prompts have been recently proposed as a tool for adapting large frozenlanguage models (LMs) to new tasks. In this work, we repurpose soft prompts tothe task of injecting world knowledge into LMs. We introduce a method to trainsoft prompts via self-supervised learning on data from knowledge bases. Theresulting soft knowledge prompts (KPs) are task independent and work as anexternal memory of the LMs. We perform qualitative and quantitative experimentsand demonstrate that: (1) KPs can effectively model the structure of thetraining data; (2) KPs can be used to improve the performance of LMs indifferent knowledge intensive tasks.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, multiprompter cooperative prompt optimization with multiagent reinforcement learning,"['Dong-Ki Kim', 'Sungryull Sohn', 'Lajanugen Logeswaran', 'Dongsub Shim', 'Honglak Lee']",http://arxiv.org/pdf/2310.16730v1.pdf,2023-10-25,," Recently, there has been an increasing interest in automated promptoptimization based on reinforcement learning (RL). This approach offersimportant advantages, such as generating interpretable prompts and beingcompatible with black-box foundation models. However, the substantial promptspace size poses challenges for RL-based methods, often leading to suboptimalpolicy convergence. This paper introduces MultiPrompter, a new framework thatviews prompt optimization as a cooperative game between prompters which taketurns composing a prompt together. Our cooperative prompt optimizationeffectively reduces the problem size and helps prompters learn optimal prompts.We test our method on the text-to-image task and show its ability to generatehigher-quality images than baselines.",,arXiv,['cs.lg'],, promptagent strategic planning with language models enables expertlevel prompt optimization,"['Xinyuan Wang', 'Chenxi Li', 'Zhen Wang', 'Fan Bai', 'Haotian Luo', 'Jiayou Zhang', 'Nebojsa Jojic', 'Eric P. Xing', 'Zhiting Hu']",http://arxiv.org/pdf/2310.16427v2.pdf,2023-10-25,," Highly effective, task-specific prompts are often heavily engineered byexperts to integrate detailed instructions and domain insights based on a deepunderstanding of both instincts of large language models (LLMs) and theintricacies of the target task. However, automating the generation of suchexpert-level prompts remains elusive. Existing prompt optimization methods tendto overlook the depth of domain knowledge and struggle to efficiently explorethe vast space of expert-level prompts. 
Addressing this, we presentPromptAgent, an optimization method that autonomously crafts prompts equivalentin quality to those handcrafted by experts. At its core, PromptAgent viewsprompt optimization as a strategic planning problem and employs a principledplanning algorithm, rooted in Monte Carlo tree search, to strategicallynavigate the expert-level prompt space. Inspired by human-like trial-and-errorexploration, PromptAgent induces precise expert-level insights and in-depthinstructions by reflecting on model errors and generating constructive errorfeedback. Such a novel framework allows the agent to iteratively examineintermediate prompts (states), refine them based on error feedbacks (actions),simulate future rewards, and search for high-reward paths leading to expertprompts. We apply PromptAgent to 12 tasks spanning three practical domains:BIG-Bench Hard (BBH), as well as domain-specific and general NLP tasks, showingit significantly outperforms strong Chain-of-Thought and recent promptoptimization baselines. Extensive analyses emphasize its capability to craftexpert-level, detailed, and domain-insightful prompts with great efficiency andgeneralizability.",,arXiv,['cs.cl'],, att3d amortized textto3d object synthesis,"['Jonathan Lorraine', 'Kevin Xie', 'Xiaohui Zeng', 'Chen-Hsuan Lin', 'Towaki Takikawa', 'Nicholas Sharp', 'Tsung-Yi Lin', 'Ming-Yu Liu', 'Sanja Fidler', 'James Lucas']",http://arxiv.org/pdf/2306.07349v1.pdf,2023-06-06,," Text-to-3D modelling has seen exciting progress by combining generativetext-to-image models with image-to-3D methods like Neural Radiance Fields.DreamFusion recently achieved high-quality results but requires a lengthy,per-prompt optimization to create 3D objects. To address this, we amortizeoptimization over text prompts by training on many prompts simultaneously witha unified model, instead of separately. With this, we share computation acrossa prompt set, training in less time than per-prompt optimization. Our framework- Amortized text-to-3D (ATT3D) - enables knowledge-sharing between prompts togeneralize to unseen setups and smooth interpolations between text for novelassets and simple animations.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cv', '68t45', 'i.2.6; i.2.7; i.3.6; i.3.7']",, blackbox prompt optimization aligning large language models without model training,"['Jiale Cheng', 'Xiao Liu', 'Kehan Zheng', 'Pei Ke', 'Hongning Wang', 'Yuxiao Dong', 'Jie Tang', 'Minlie Huang']",http://arxiv.org/pdf/2311.04155v2.pdf,2023-11-07,," Large language models (LLMs) have shown impressive success in variousapplications. However, these models are often not well aligned with humanintents, which calls for additional treatments on them, that is, the alignmentproblem. To make LLMs better follow user instructions, existing alignmentmethods mostly focus on further training them. However, the extra training ofLLMs are usually expensive in terms of GPU compute; worse still, LLMs ofinterest are oftentimes not accessible for user-demanded training, such asGPTs. In this work, we take a different perspective -- Black-Box PromptOptimization (BPO) -- to perform alignments. The idea is to optimize userprompts to suit LLMs' input understanding, so as to best realize users' intentswithout updating LLMs' parameters. BPO is model-agnostic and the empiricalresults demonstrate that the BPO-aligned ChatGPT yields a 22% increase in thewin rate against its original version, and 10% for GPT-4. 
Importantly, theBPO-aligned LLMs can outperform the same models aligned by PPO and DPO, and italso brings additional performance gains when combining BPO with PPO or DPO.Code and datasets are released at https://github.com/thu-coai/BPO.",,arXiv,['cs.cl'],, zegot zeroshot segmentation through optimal transport of text prompts,"['Kwanyoung Kim', 'Yujin Oh', 'Jong Chul Ye']",http://arxiv.org/pdf/2301.12171v2.pdf,2023-01-28,," Recent success of large-scale Contrastive Language-Image Pre-training (CLIP)has led to great promise in zero-shot semantic segmentation by transferringimage-text aligned knowledge to pixel-level classification. However, existingmethods usually require an additional image encoder or retraining/tuning theCLIP module. Here, we propose a novel Zero-shot segmentation with OptimalTransport (ZegOT) method that matches multiple text prompts with frozen imageembeddings through optimal transport. In particular, we introduce a novelMultiple Prompt Optimal Transport Solver (MPOT), which is designed to learn anoptimal mapping between multiple text prompts and visual feature maps of thefrozen image encoder hidden layers. This unique mapping method facilitates eachof the multiple text prompts to effectively focus on distinct visual semanticattributes. Through extensive experiments on benchmark datasets, we show thatour method achieves the state-of-the-art (SOTA) performance over existingZero-shot Semantic Segmentation (ZS3) approaches.",,arXiv,"['cs.cv', 'cs.ai', 'cs.lg', 'stat.ml']",, automatic data transformation using large language model an experimental study on building energy data,"['Ankita Sharma', 'Xuanmao Li', 'Hong Guan', 'Guoxin Sun', 'Liang Zhang', 'Lanjun Wang', 'Kesheng Wu', 'Lei Cao', 'Erkang Zhu', 'Alexander Sim', 'Teresa Wu', 'Jia Zou']",http://arxiv.org/pdf/2309.01957v2.pdf,2023-09-05,," Existing approaches to automatic data transformation are insufficient to meetthe requirements in many real-world scenarios, such as the building sector.First, there is no convenient interface for domain experts to provide domainknowledge easily. Second, they require significant training data collectionoverheads. Third, the accuracy suffers from complicated schema changes. Tobridge this gap, we present a novel approach that leverages the uniquecapabilities of large language models (LLMs) in coding, complex reasoning, andzero-shot learning to generate SQL code that transforms the source datasetsinto the target datasets. We demonstrate the viability of this approach bydesigning an LLM-based framework, termed SQLMorpher, which comprises a promptgenerator that integrates the initial prompt with optional domain knowledge andhistorical patterns in external databases. It also implements an iterativeprompt optimization mechanism that automatically improves the prompt based onflaw detection. The key contributions of this work include (1) pioneering anend-to-end LLM-based solution for data transformation, (2) developing abenchmark dataset of 105 real-world building energy data transformationproblems, and (3) conducting an extensive empirical evaluation where ourapproach achieved 96% accuracy in all 105 problems. 
SQLMorpher demonstrates the effectiveness of utilizing LLMs in complex, domain-specific challenges, highlighting their potential to drive sustainable solutions.",,arXiv,['cs.db'],, unleashing the potential of prompt engineering in large language models a comprehensive review,"['Banghao Chen', 'Zhaofeng Zhang', 'Nicolas Langrené', 'Shengxin Zhu']",http://arxiv.org/pdf/2310.14735v2.pdf,2023-10-23,," This paper delves into the pivotal role of prompt engineering in unleashing the capabilities of Large Language Models (LLMs). Prompt engineering is the process of structuring input text for LLMs and is a technique integral to optimizing the efficacy of LLMs. This survey elucidates foundational principles of prompt engineering, such as role-prompting, one-shot, and few-shot prompting, as well as more advanced methodologies such as the chain-of-thought and tree-of-thoughts prompting. The paper sheds light on how external assistance in the form of plugins can assist in this task, and reduce machine hallucination by retrieving external knowledge. We subsequently delineate prospective directions in prompt engineering research, emphasizing the need for a deeper understanding of structures and the role of agents in Artificial Intelligence-Generated Content (AIGC) tools. We discuss how to assess the efficacy of prompt methods from different perspectives and using different methods. Finally, we gather information about the application of prompt engineering in such fields as education and programming, showing its transformative potential. This comprehensive survey aims to serve as a friendly guide for anyone venturing through the big world of LLMs and prompt engineering.",,arXiv,"['cs.cl', 'cs.ai', 'i.2.7']",, prompt engineering for students of medicine and their teachers,['Thomas F. Heston'],http://arxiv.org/pdf/2308.11628v1.pdf,2023-08-08,," ""Prompt Engineering for Students of Medicine and Their Teachers"" brings the principles of prompt engineering for large language models such as ChatGPT and Google Bard to medical education. This book contains a comprehensive guide to prompt engineering to help both teachers and students improve education in the medical field. Just as prompt engineering is critical in getting good information out of an AI, it is also critical to get students to think and understand more deeply. The principles of prompt engineering that we have learned from AI systems have the potential to simultaneously revolutionize learning in the healthcare field. The book analyzes from multiple angles the anatomy of a good prompt for both AI models and students. The different types of prompts are examined, showing how each style has unique characteristics and applications. The principles of prompt engineering, applied properly, are demonstrated to be effective in teaching across the diverse fields of anatomy, physiology, pathology, pharmacology, and clinical skills. Just like ChatGPT and similar large language AI models, students need clear and detailed prompting in order for them to fully understand a topic. Using identical principles, a prompt that gets good information from an AI will also cause a student to think more deeply and accurately. The process of prompt engineering facilitates this process. Because each chapter contains multiple examples and key takeaways, it is a practical guide for implementing prompt engineering in the learning process.
It provides a hands-on approach to ensure readers can immediatelyapply the concepts they learn",,arXiv,['cs.hc'],, review of large vision models and visual prompt engineering,"['Jiaqi Wang', 'Zhengliang Liu', 'Lin Zhao', 'Zihao Wu', 'Chong Ma', 'Sigang Yu', 'Haixing Dai', 'Qiushi Yang', 'Yiheng Liu', 'Songyao Zhang', 'Enze Shi', 'Yi Pan', 'Tuo Zhang', 'Dajiang Zhu', 'Xiang Li', 'Xi Jiang', 'Bao Ge', 'Yixuan Yuan', 'Dinggang Shen', 'Tianming Liu', 'Shu Zhang']",http://arxiv.org/pdf/2307.00855v1.pdf,2023-07-03,," Visual prompt engineering is a fundamental technology in the field of visualand image Artificial General Intelligence, serving as a key component forachieving zero-shot capabilities. As the development of large vision modelsprogresses, the importance of prompt engineering becomes increasingly evident.Designing suitable prompts for specific visual tasks has emerged as ameaningful research direction. This review aims to summarize the methodsemployed in the computer vision domain for large vision models and visualprompt engineering, exploring the latest advancements in visual promptengineering. We present influential large models in the visual domain and arange of prompt engineering methods employed on these models. It is our hopethat this review provides a comprehensive and systematic description of promptengineering methods based on large visual models, offering valuable insightsfor future researchers in their exploration of this field.",,arXiv,"['cs.cv', 'cs.ai']",, prompt engineering and calibration for zeroshot commonsense reasoning,['Chenkai Ma'],http://arxiv.org/pdf/2304.06962v1.pdf,2023-04-14,," Prompt engineering and calibration make large language models excel atreasoning tasks, including multiple choice commonsense reasoning. From apractical perspective, we investigate and evaluate these strategies on smallerlanguage models. Through experiments on five commonsense reasoning benchmarks,we find that each strategy favors certain models, but their joint effects aremostly negative.",,arXiv,"['cs.cl', 'cs.ai']",, exploring the intersection of large language models and agentbased modeling via prompt engineering,['Edward Junprung'],http://arxiv.org/pdf/2308.07411v1.pdf,2023-08-14,," The final frontier for simulation is the accurate representation of complex,real-world social systems. While agent-based modeling (ABM) seeks to study thebehavior and interactions of agents within a larger system, it is unable tofaithfully capture the full complexity of human-driven behavior. Large languagemodels (LLMs), like ChatGPT, have emerged as a potential solution to thisbottleneck by enabling researchers to explore human-driven interactions inpreviously unimaginable ways. Our research investigates simulations of humaninteractions using LLMs. Through prompt engineering, inspired by Park et al.(2023), we present two simulations of believable proxies of human behavior: atwo-agent negotiation and a six-agent murder mystery game.",,arXiv,"['cs.ai', 'cs.ma']",, grimm in wonderland prompt engineering with midjourney to illustrate fairytales,['Martin Ruskov'],http://arxiv.org/pdf/2302.08961v2.pdf,2023-02-17,," The quality of text-to-image generation is continuously improving, yet theboundaries of its applicability are still unclear. In particular, refinement ofthe text input with the objective of achieving better results - commonly calledprompt engineering - so far seems to have not been geared towards work withpre-existing texts. 
We investigate whether text-to-image generation and promptengineering could be used to generate basic illustrations of popularfairytales. Using Midjourney v4, we engage in action research with a dual aim:to attempt to generate 5 believable illustrations for each of 5 popularfairytales, and to define a prompt engineering process that starts from apre-existing text and arrives at an illustration of it. We arrive at atentative 4-stage process: i) initial prompt, ii) composition adjustment, iii)style refinement, and iv) variation selection. We also discuss three reasonswhy the generation model struggles with certain illustrations: difficultieswith counts, bias from stereotypical configurations and inability to depictoverly fantastic situations. Our findings are not limited to the specificgeneration model and are intended to be generalisable to future ones.",,arXiv,"['cs.cl', 'cs.ai', 'cs.hc', 'i.2']",, prompt engineering or fine tuning an empirical assessment of large language models in automated software engineering tasks,"['Jiho Shin', 'Clark Tang', 'Tahmineh Mohati', 'Maleknaz Nayebi', 'Song Wang', 'Hadi Hemmati']",http://arxiv.org/pdf/2310.10508v1.pdf,2023-10-11,," In this paper, we investigate the effectiveness of state-of-the-art LLM,i.e., GPT-4, with three different prompting engineering techniques (i.e., basicprompting, in-context learning, and task-specific prompting) against 18fine-tuned LLMs on three typical ASE tasks, i.e., code generation, codesummarization, and code translation. Our quantitative analysis of theseprompting strategies suggests that prompt engineering GPT-4 cannot necessarilyand significantly outperform fine-tuning smaller/older LLMs in all three tasks.For comment generation, GPT-4 with the best prompting strategy (i.e.,task-specific prompt) had outperformed the first-ranked fine-tuned model by8.33% points on average in BLEU. However, for code generation, the first-rankedfine-tuned model outperforms GPT-4 with best prompting by 16.61% and 28.3%points, on average in BLEU. For code translation, GPT-4 and fine-tunedbaselines tie as they outperform each other on different translation tasks. Toexplore the impact of different prompting strategies, we conducted a user studywith 27 graduate students and 10 industry practitioners. From our qualitativeanalysis, we find that the GPT-4 with conversational prompts (i.e., when ahuman provides feedback and instructions back and forth with a model to achievebest results) showed drastic improvement compared to GPT-4 with automaticprompting strategies. Moreover, we observe that participants tend to requestimprovements, add more context, or give specific instructions as conversationalprompts, which goes beyond typical and generic prompting strategies. Our studysuggests that, at its current state, GPT-4 with conversational prompting hasgreat potential for ASE tasks, but fully automated prompt engineering with nohuman in the loop requires more study and improvement.",,arXiv,['cs.se'],, unsupervised prompt learning for visionlanguage models,"['Tony Huang', 'Jack Chu', 'Fangyun Wei']",http://arxiv.org/pdf/2204.03649v2.pdf,2022-04-07,," Contrastive vision-language models like CLIP have shown great progress intransfer learning. In the inference stage, the proper text description, alsoknown as prompt, needs to be carefully designed to correctly classify the givenimages. 
In order to avoid laborious prompt engineering, recent works such asCoOp, CLIP-Adapter and Tip-Adapter propose to adapt vision-language models fordownstream image recognition tasks on a small set of labeled data. Thoughpromising improvements are achieved, requiring labeled data from the targetdatasets may restrict the scalability. In this paper, we explore a differentscenario, in which the labels of the target datasets are unprovided, and wepresent an unsupervised prompt learning (UPL) approach to avoid promptengineering while simultaneously improving transfer performance of CLIP-likevision-language models. As far as we know, UPL is the first work to introduceunsupervised learning into prompt learning. Experimentally, our UPL outperformsoriginal CLIP with prompt engineering on ImageNet as well as other 10 datasets.An enhanced version of UPL is even competitive with the 8-shot CoOp and the8-shot TIP-Adapter on most datasets. Code and models are available athttps://github.com/tonyhuang2022/UPL.",,arXiv,['cs.cv'],, coprompt supporting prompt sharing and referring in collaborative natural language programming,"['Felicia Li Feng', 'Ryan Yen', 'Yuzhe You', 'Mingming Fan', 'Jian Zhao', 'Zhicong Lu']",http://arxiv.org/pdf/2310.09235v2.pdf,2023-10-13,," Natural language (NL) programming has become more approachable due to thepowerful code-generation capability of large language models (LLMs). This shiftto using NL to program enhances collaborative programming by reducingcommunication barriers and context-switching among programmers from varyingbackgrounds. However, programmers may face challenges during prompt engineeringin a collaborative setting as they need to actively keep aware of theircollaborators' progress and intents. In this paper, we aim to investigate waysto assist programmers' prompt engineering in a collaborative context. We firstconducted a formative study to understand the workflows and challenges ofprogrammers when using NL for collaborative programming. Based on our findings,we implemented a prototype, CoPrompt, to support collaborative promptengineering by providing referring, requesting, sharing, and linkingmechanisms. Our user study indicates that CoPrompt assists programmers incomprehending collaborators' prompts and building on their collaborators' work,reducing repetitive updates and communication costs.",,arXiv,['cs.hc'],, promptengineering and transformerbased question generation and evaluation,['Rubaba Amyeen'],http://arxiv.org/pdf/2310.18867v1.pdf,2023-10-29,," Question generation has numerous applications in the educational context.Question generation can prove helpful for students when reviewing content andtesting themselves. Furthermore, a question generation model can aid teachersby lessening the burden of creating assessments and other practice material.This paper aims to find the best method to generate questions from textual datathrough a transformer model and prompt engineering. In this research, wefinetuned a pretrained distilBERT model on the SQuAD question answering datasetto generate questions. In addition to training a transformer model, promptengineering was applied to generate questions effectively using the LLaMAmodel. The generated questions were compared against the baseline questions inthe SQuAD dataset to evaluate the effectiveness of four different prompts. Allfour prompts demonstrated over 60% similarity on average. 
Of theprompt-generated questions, 30% achieved a high similarity score greater than70%.",,arXiv,"['cs.cl', 'cs.ai']",, large language models in the workplace a case study on prompt engineering for job type classification,"['Benjamin Clavié', 'Alexandru Ciceu', 'Frederick Naylor', 'Guillaume Soulié', 'Thomas Brightwell']",http://arxiv.org/pdf/2303.07142v3.pdf,2023-03-13,," This case study investigates the task of job classification in a real-worldsetting, where the goal is to determine whether an English-language job postingis appropriate for a graduate or entry-level position. We explore multipleapproaches to text classification, including supervised approaches such astraditional models like Support Vector Machines (SVMs) and state-of-the-artdeep learning methods such as DeBERTa. We compare them with Large LanguageModels (LLMs) used in both few-shot and zero-shot classification settings. Toaccomplish this task, we employ prompt engineering, a technique that involvesdesigning prompts to guide the LLMs towards the desired output. Specifically,we evaluate the performance of two commercially available state-of-the-artGPT-3.5-based language models, text-davinci-003 and gpt-3.5-turbo. We alsoconduct a detailed analysis of the impact of different aspects of promptengineering on the model's performance. Our results show that, with awell-designed prompt, a zero-shot gpt-3.5-turbo classifier outperforms allother models, achieving a 6% increase in Precision@95% Recall compared to thebest supervised approach. Furthermore, we observe that the wording of theprompt is a critical factor in eliciting the appropriate ""reasoning"" in themodel, and that seemingly minor aspects of the prompt significantly affect themodel's performance.",,arXiv,['cs.cl'],, a taxonomy of prompt modifiers for texttoimage generation,['Jonas Oppenlaender'],http://arxiv.org/pdf/2204.13988v3.pdf,2022-04-20,," Text-to-image generation has seen an explosion of interest since 2021. Today,beautiful and intriguing digital images and artworks can be synthesized fromtextual inputs (""prompts"") with deep generative models. Online communitiesaround text-to-image generation and AI generated art have quickly emerged. Thispaper identifies six types of prompt modifiers used by practitioners in theonline community based on a 3-month ethnographic study. The novel taxonomy ofprompt modifiers provides researchers a conceptual starting point forinvestigating the practice of text-to-image generation, but may also helppractitioners of AI generated art improve their images. We further outline howprompt modifiers are applied in the practice of ""prompt engineering."" Wediscuss research opportunities of this novel creative practice in the field ofHuman-Computer Interaction (HCI). The paper concludes with a discussion ofbroader implications of prompt engineering from the perspective of Human-AIInteraction (HAI) in future applications beyond the use case of text-to-imagegeneration and AI generated art.",,arXiv,"['cs.mm', 'cs.cl', 'cs.hc', 'h.5; h.m; j.5']",, what gpt knows about who is who,"['Xiaohan Yang', 'Eduardo Peynetti', 'Vasco Meerman', 'Chris Tanner']",http://arxiv.org/pdf/2205.07407v1.pdf,2022-05-16,," Coreference resolution -- which is a crucial task for understanding discourseand language at large -- has yet to witness widespread benefits from largelanguage models (LLMs). Moreover, coreference resolution systems largely relyon supervised labels, which are highly expensive and difficult to annotate,thus making it ripe for prompt engineering. 
In this paper, we introduce aQA-based prompt-engineering method and discern \textit{generative}, pre-trainedLLMs' abilities and limitations toward the task of coreference resolution. Ourexperiments show that GPT-2 and GPT-Neo can return valid answers, but thattheir capabilities to identify coreferent mentions are limited andprompt-sensitive, leading to inconsistent results.",,arXiv,"['cs.cl', 'cs.lg']",, looking for a handsome carpenter! debiasing gpt3 job advertisements,"['Conrad Borchers', 'Dalia Sara Gala', 'Benjamin Gilburt', 'Eduard Oravkin', 'Wilfried Bounsi', 'Yuki M. Asano', 'Hannah Rose Kirk']",http://arxiv.org/pdf/2205.11374v1.pdf,2022-05-23,," The growing capability and availability of generative language models hasenabled a wide range of new downstream tasks. Academic research has identified,quantified and mitigated biases present in language models but is rarelytailored to downstream tasks where wider impact on individuals and society canbe felt. In this work, we leverage one popular generative language model,GPT-3, with the goal of writing unbiased and realistic job advertisements. Wefirst assess the bias and realism of zero-shot generated advertisements andcompare them to real-world advertisements. We then evaluate prompt-engineeringand fine-tuning as debiasing methods. We find that prompt-engineering withdiversity-encouraging prompts gives no significant improvement to bias, norrealism. Conversely, fine-tuning, especially on unbiased real advertisements,can improve realism and reduce bias.",,arXiv,"['cs.cl', 'cs.ai']",, arguments to key points mapping with promptbased learning,"['Ahnaf Mozib Samin', 'Behrooz Nikandish', 'Jingyan Chen']",http://arxiv.org/pdf/2211.14995v1.pdf,2022-11-28,," Handling and digesting a huge amount of information in an efficient mannerhas been a long-term demand in modern society. Some solutions to map key points(short textual summaries capturing essential information and filteringredundancies) to a large number of arguments/opinions have been providedrecently (Bar-Haim et al., 2020). To complement the full picture of theargument-to-keypoint mapping task, we mainly propose two approaches in thispaper. The first approach is to incorporate prompt engineering for fine-tuningthe pre-trained language models (PLMs). The second approach utilizesprompt-based learning in PLMs to generate intermediary texts, which are thencombined with the original argument-keypoint pairs and fed as inputs to aclassifier, thereby mapping them. Furthermore, we extend the experiments tocross/in-domain to conduct an in-depth analysis. In our evaluation, we findthat i) using prompt engineering in a more direct way (Approach 1) can yieldpromising results and improve the performance; ii) Approach 2 performsconsiderably worse than Approach 1 due to the negation issue of the PLM.",,arXiv,['cs.cl'],, legal prompt engineering for multilingual legal judgement prediction,"['Dietrich Trautmann', 'Alina Petrova', 'Frank Schilder']",http://arxiv.org/pdf/2212.02199v1.pdf,2022-12-05,," Legal Prompt Engineering (LPE) or Legal Prompting is a process to guide andassist a large language model (LLM) with performing a natural legal languageprocessing (NLLP) skill. Our goal is to use LPE with LLMs over long legaldocuments for the Legal Judgement Prediction (LJP) task. We investigate theperformance of zero-shot LPE for given facts in case-texts from the EuropeanCourt of Human Rights (in English) and the Federal Supreme Court of Switzerland(in German, French and Italian). 
Our results show that zero-shot LPE is better compared to the baselines, but it still falls short compared to current state-of-the-art supervised approaches. Nevertheless, the results are important, since there was 1) no explicit domain-specific data used - so we show that the transfer to the legal domain is possible for general-purpose LLMs, and 2) the LLMs were directly applied without any further training or fine-tuning - which in turn saves immensely in terms of additional computational costs.",,arXiv,"['cs.cl', 'cs.ai']",, the infinite index information retrieval on generative texttoimage models,"['Niklas Deckers', 'Maik Fröbe', 'Johannes Kiesel', 'Gianluca Pandolfo', 'Christopher Schröder', 'Benno Stein', 'Martin Potthast']",http://arxiv.org/pdf/2212.07476v2.pdf,2022-12-14,," Conditional generative models such as DALL-E and Stable Diffusion generate images based on a user-defined text, the prompt. Finding and refining prompts that produce a desired image has become the art of prompt engineering. Generative models do not provide a built-in retrieval model for a user's information need expressed through prompts. In light of an extensive literature review, we reframe prompt engineering for generative models as interactive text-based retrieval on a novel kind of ""infinite index"". We apply these insights for the first time in a case study on image generation for game design with an expert. Finally, we envision how active learning may help to guide the retrieval of generated images.",,arXiv,"['cs.ir', 'cs.cl', 'cs.cv']",, prompt engineering for transformerbased chemical similarity search identifies structurally distinct functional analogues,"['Clayton W. Kosonocky', 'Aaron L. Feller', 'Claus O. Wilke', 'Andrew D. Ellington']",http://arxiv.org/pdf/2305.16330v1.pdf,2023-05-17,," Chemical similarity searches are widely used in-silico methods for identifying new drug-like molecules. These methods have historically relied on structure-based comparisons to compute molecular similarity. Here, we use a chemical language model to create a vector-based chemical search. We extend implementations by creating a prompt engineering strategy that utilizes two different chemical string representation algorithms: one for the query and the other for the database. We explore this method by reviewing the search results from five drug-like query molecules (penicillin G, nirmatrelvir, zidovudine, lysergic acid diethylamide, and fentanyl) and three dye-like query molecules (acid blue 25, avobenzone, and 2-diphenylaminocarbazole). We find that this novel method identifies molecules that are functionally similar to the query, indicated by the associated patent literature, and that many of these molecules are structurally distinct from the query, making them unlikely to be found with traditional chemical similarity search methods. This method may aid in the discovery of novel structural classes of molecules that achieve target functionality.",,arXiv,"['physics.chem-ph', 'cs.lg']",, submodular minimax optimization finding effective sets,"['Loay Mualem', 'Ethan R. Elenberg', 'Moran Feldman', 'Amin Karbasi']",http://arxiv.org/pdf/2305.16903v1.pdf,2023-05-26,," Despite the rich existing literature about minimax optimization in continuous settings, only very partial results of this kind have been obtained for combinatorial settings. In this paper, we fill this gap by providing a characterization of submodular minimax optimization, the problem of finding a set (for either the min or the max player) that is effective against every possible response.
We show when and under what conditions we can find suchsets. We also demonstrate how minimax submodular optimization provides robustsolutions for downstream machine learning applications such as (i) efficientprompt engineering for question answering, (ii) prompt engineering for dialogstate tracking, (iii) identifying robust waiting locations for ride-sharing,(iv) ride-share difficulty kernelization, and (v) finding adversarial images.Our experiments demonstrate that our proposed algorithms consistentlyoutperform other baselines.",,arXiv,"['cs.lg', 'cs.dm', 'math.oc', '68r05 (primary) 90c26, 90c20, 68t20, 68w40 (secondary)', 'g.2.1; i.2.m; f.2.2']",, promptmagician interactive prompt engineering for texttoimage creation,"['Yingchaojie Feng', 'Xingbo Wang', 'Kam Kwai Wong', 'Sijia Wang', 'Yuhong Lu', 'Minfeng Zhu', 'Baicheng Wang', 'Wei Chen']",http://arxiv.org/pdf/2307.09036v2.pdf,2023-07-18,," Generative text-to-image models have gained great popularity among the publicfor their powerful capability to generate high-quality images based on naturallanguage prompts. However, developing effective prompts for desired images canbe challenging due to the complexity and ambiguity of natural language. Thisresearch proposes PromptMagician, a visual analysis system that helps usersexplore the image results and refine the input prompts. The backbone of oursystem is a prompt recommendation model that takes user prompts as input,retrieves similar prompt-image pairs from DiffusionDB, and identifies special(important and relevant) prompt keywords. To facilitate interactive promptrefinement, PromptMagician introduces a multi-level visualization for thecross-modal embedding of the retrieved images and recommended keywords, andsupports users in specifying multiple criteria for personalized exploration.Two usage scenarios, a user study, and expert interviews demonstrate theeffectiveness and usability of our system, suggesting it facilitates promptengineering and improves the creativity support of the generative text-to-imagemodel.",,arXiv,"['cs.ai', 'cs.hc']",, interactive task planning with language models,"['Boyi Li', 'Philipp Wu', 'Pieter Abbeel', 'Jitendra Malik']",http://arxiv.org/pdf/2310.10645v1.pdf,2023-10-16,," An interactive robot framework accomplishes long-horizon task planning andcan easily generalize to new goals or distinct tasks, even during execution.However, most traditional methods require predefined module design, which makesit hard to generalize to different goals. Recent large language model basedapproaches can allow for more open-ended planning but often require heavyprompt engineering or domain-specific pretrained models. To tackle this, wepropose a simple framework that achieves interactive task planning withlanguage models. Our system incorporates both high-level planning and low-levelfunction execution via language. We verify the robustness of our system ingenerating novel high-level instructions for unseen objectives and its ease ofadaptation to different tasks by merely substituting the task guidelines,without the need for additional complex prompt engineering. Furthermore, whenthe user sends a new request, our system is able to replan accordingly withprecision based on the new request, task guidelines and previously executedsteps. 
Please check more details on our https://wuphilipp.github.io/itp_site and https://youtu.be/TrKLuyv26_g",,arXiv,"['cs.ro', 'cs.ai', 'cs.cl', 'cs.hc']",, prompt engineering through the lens of optimal control,"['Yifan Luo', 'Yiming Tang', 'Chengfeng Shen', 'Zhennan Zhou', 'Bin Dong']",http://arxiv.org/pdf/2310.14201v2.pdf,2023-10-22,," Prompt Engineering (PE) has emerged as a critical technique for guiding Large Language Models (LLMs) in solving intricate tasks. Its importance is highlighted by its potential to significantly enhance the efficiency and effectiveness of human-machine interaction. As tasks grow increasingly complex, recent advanced PE methods have extended beyond the limitations of single-round interactions to embrace multi-round interactions, which allows for a deeper and more nuanced engagement with LLMs. In this paper, we propose an optimal control framework tailored for multi-round interactions with LLMs. This framework provides a unified mathematical structure that not only systematizes the existing PE methods but also sets the stage for rigorous analytical improvements. Furthermore, we extend this framework to include PE via ensemble methods and multi-agent collaboration, thereby enlarging the scope of applicability. By adopting an optimal control perspective, we offer fresh insights into existing PE methods and highlight theoretical challenges that warrant future research. Besides, our work lays a foundation for the development of more effective and interpretable PE methods.",,arXiv,"['cs.lg', 'math.oc']",, a communication theory perspective on prompting engineering methods for large language models,"['Yuanfeng Song', 'Yuanqin He', 'Xuefang Zhao', 'Hanlin Gu', 'Di Jiang', 'Haijun Yang', 'Lixin Fan', 'Qiang Yang']",http://arxiv.org/pdf/2310.18358v1.pdf,2023-10-24,," The springing up of Large Language Models (LLMs) has shifted the community from single-task-orientated natural language processing (NLP) research to a holistic end-to-end multi-task learning paradigm. Along this line of research endeavors in the area, LLM-based prompting methods have attracted much attention, partially due to the technological advantages brought by prompt engineering (PE) as well as the underlying NLP principles disclosed by various prompting methods. Traditional supervised learning usually requires training a model based on labeled data and then making predictions. In contrast, PE methods directly use the powerful capabilities of existing LLMs (i.e., GPT-3 and GPT-4) via composing appropriate prompts, especially under few-shot or zero-shot scenarios. Facing the abundance of studies related to the prompting and the ever-evolving nature of this field, this article aims to (i) illustrate a novel perspective to review existing PE methods, within the well-established communication theory framework; (ii) facilitate a better/deeper understanding of developing trends of existing PE methods used in four typical tasks; (iii) shed light on promising research directions for future PE methods.",,arXiv,"['cs.cl', 'cs.ai']",, towards zeroshot and fewshot table question answering using gpt3,"['Pragya Srivastava', 'Tanuja Ganu', 'Saikat Guha']",http://arxiv.org/pdf/2210.17284v1.pdf,2022-10-31,," We present very early results on using GPT-3 to perform question answering on tabular data. We find that stock pre-trained GPT-3 is able to zero-shot learn the table structure from a serialized JSON array-of-arrays representation, and able to answer lookup queries and simple comparison questions in natural language without any fine-tuning.
We further find that simple promptengineering to include few-shot static Q&A examples significantly improvesaccuracy. Lastly, we find that intermixing passage text improves accuracy evenfurther on heterogeneous data. We apply our approach on a novel dataset ofsimple tables in newspaper infographics with promising results. Overall, wefind much cause for optimism in this basic approach.",,arXiv,"['cs.lg', '14j60 (primary)']",, investigating prompt engineering in diffusion models,"['Sam Witteveen', 'Martin Andrews']",http://arxiv.org/pdf/2211.15462v1.pdf,2022-11-21,," With the spread of the use of Text2Img diffusion models such as DALL-E 2,Imagen, Mid Journey and Stable Diffusion, one challenge that artists face isselecting the right prompts to achieve the desired artistic output. We presenttechniques for measuring the effect that specific words and phrases in promptshave, and (in the Appendix) present guidance on the selection of prompts toproduce desired effects.",,arXiv,"['cs.cv', 'cs.ai', 'cs.cl']",, refining the responses of llms by themselves,"['Tianqiang Yan', 'Tiansheng Xu']",http://arxiv.org/pdf/2305.04039v1.pdf,2023-05-06,," In this paper, we propose a simple yet efficient approach based on promptengineering that leverages the large language model itself to optimize itsanswers without relying on auxiliary models. We introduce an iterativeself-evaluating optimization mechanism, with the potential for improved outputquality as iterations progress, removing the need for manual intervention. Theexperiment's findings indicate that utilizing our response refinement frameworkon the GPT-3.5 model yields results that are on par with, or even surpass,those generated by the cutting-edge GPT-4 model. Detailed implementationstrategies and illustrative examples are provided to demonstrate thesuperiority of our proposed solution.",,arXiv,"['cs.cl', 'cs.ai']",, efficient blackbox adversarial attacks on neural text detectors,"['Vitalii Fishchuk', 'Daniel Braun']",http://arxiv.org/pdf/2311.01873v1.pdf,2023-11-03,," Neural text detectors are models trained to detect whether a given text wasgenerated by a language model or written by a human. In this paper, weinvestigate three simple and resource-efficient strategies (parameter tweaking,prompt engineering, and character-level mutations) to alter texts generated byGPT-3.5 that are unsuspicious or unnoticeable for humans but causemisclassification by neural text detectors. The results show that especiallyparameter tweaking and character-level mutations are effective strategies.",,arXiv,['cs.cl'],, prompted software engineering in the era of ai models,['Dae-Kyoo Kim'],http://arxiv.org/pdf/2311.03359v1.pdf,2023-09-07,," This paper introduces prompted software engineering (PSE), which integratesprompt engineering to build effective prompts for language-based AI models, toenhance the software development process. PSE enables the use of AI models insoftware development to produce high-quality software with fewer resources,automating tedious tasks and allowing developers to focus on more innovativeaspects. However, effective prompts are necessary to guide software developmentin generating accurate, relevant, and useful responses, while mitigating risksof misleading outputs. 
This paper describes how productive prompts should bebuilt throughout the software development cycle.",,arXiv,['cs.se'],, conversing with copilot exploring prompt engineering for solving cs1 problems using natural language,"['Paul Denny', 'Viraj Kumar', 'Nasser Giacaman']",http://arxiv.org/pdf/2210.15157v1.pdf,2022-10-27,," GitHub Copilot is an artificial intelligence model for automaticallygenerating source code from natural language problem descriptions. Since June2022, Copilot has officially been available for free to all students as aplug-in to development environments like Visual Studio Code. Prior workexploring OpenAI Codex, the underlying model that powers Copilot, has shown itperforms well on typical CS1 problems thus raising concerns about the impact itwill have on how introductory programming courses are taught. However, littleis known about the types of problems for which Copilot does not perform well,or about the natural language interactions that a student might have withCopilot when resolving errors. We explore these questions by evaluating theperformance of Copilot on a publicly available dataset of 166 programmingproblems. We find that it successfully solves around half of these problems onits very first attempt, and that it solves 60\% of the remaining problems usingonly natural language changes to the problem description. We argue that thistype of prompt engineering, which we believe will become a standard interactionbetween human and Copilot when it initially fails, is a potentially usefullearning activity that promotes computational thinking skills, and is likely tochange the nature of code writing skill development.",,arXiv,"['cs.hc', 'cs.ai']",, enhancing automated program repair through finetuning and prompt engineering,"['Rishov Paul', 'Md. Mohib Hossain', 'Mohammed Latif Siddiq', 'Masum Hasan', 'Anindya Iqbal', 'Joanna C. S. Santos']",http://arxiv.org/pdf/2304.07840v2.pdf,2023-04-16,," Sequence-to-sequence models have been used to transform erroneous programsinto correct ones when trained with a large enough dataset. Some recent studiesalso demonstrated strong empirical evidence that code review could improve theprogram repair further. Large language models, trained with Natural Language(NL) and Programming Language (PL), can contain inherent knowledge of both. Inthis study, we investigate if this inherent knowledge of PL and NL can beutilized to improve automated program repair. We applied PLBART and CodeT5, twostate-of-the-art language models that are pre-trained with both PL and NL, ontwo such natural language-based program repair datasets and found that thepre-trained language models fine-tuned with datasets containing both codereview and subsequent code changes notably outperformed each of the previousmodels. With the advent of code generative models like Codex and GPT-3.5-Turbo,we also performed zero-shot and few-shots learning-based prompt engineering toassess their performance on these datasets. However, the practical applicationof using LLMs in the context of automated program repair is still a long wayoff based on our manual analysis of the generated repaired codes by thelearning models.",,arXiv,"['cs.lg', 'cs.se']",, cheapfake detection with llm using prompt engineering,"['Guangyang Wu', 'Weijie Wu', 'Xiaohong Liu', 'Kele Xu', 'Tianjiao Wan', 'Wenyi Wang']",http://arxiv.org/pdf/2306.02776v1.pdf,2023-06-05,," The misuse of real photographs with conflicting image captions in news itemsis an example of the out-of-context (OOC) misuse of media. 
In order to detect OOC media, individuals must determine the accuracy of the statement and evaluate whether the triplet (~\textit{i.e.}, the image and two captions) relates to the same event. This paper presents a novel learnable approach for detecting OOC media in ICME'23 Grand Challenge on Detecting Cheapfakes. The proposed method is based on the COSMOS structure, which assesses the coherence between an image and captions, as well as between two captions. We enhance the baseline algorithm by incorporating a Large Language Model (LLM), GPT3.5, as a feature extractor. Specifically, we propose an innovative approach to feature extraction utilizing prompt engineering to develop a robust and reliable feature extractor with the GPT3.5 model. The proposed method captures the correlation between two captions and effectively integrates this module into the COSMOS baseline model, which allows for a deeper understanding of the relationship between captions. By incorporating this module, we demonstrate the potential for significant improvements in cheap-fakes detection performance. The proposed methodology holds promising implications for various applications such as natural language processing, image captioning, and text-to-image synthesis. Docker for submission is available at https://hub.docker.com/repository/docker/mulns/acmmmcheapfakes.",,arXiv,['cs.cv'],, improving knowledge extraction from llms for task learning through agent analysis,"['James R. Kirk', 'Robert E. Wray', 'Peter Lindes']",http://arxiv.org/pdf/2306.06770v3.pdf,2023-06-11,," Large language models (LLMs) offer significant promise as a knowledge source for task learning. Prompt engineering has been shown to be effective for eliciting knowledge from an LLM, but alone it is insufficient for acquiring relevant, situationally grounded knowledge for an embodied agent learning novel tasks. We describe a cognitive-agent approach that extends and complements prompt engineering, mitigating its limitations and thus enabling an agent to acquire new task knowledge matched to its native language capabilities, embodiment, environment, and user preferences. The approach is to increase the response space of LLMs and deploy general strategies, embedded within the autonomous agent, to evaluate, repair, and select among candidate responses produced by the LLM. We describe the approach and experiments that show how an agent, by retrieving and evaluating a breadth of responses from the LLM, can achieve 77-94% task completion in one-shot learning without user oversight. The approach achieves 100% task completion when human oversight (such as an indication of preference) is provided. Further, the type of oversight largely shifts from explicit, natural language instruction to simple confirmation/disconfirmation of high-quality responses that have been vetted by the agent before presentation to a user.",,arXiv,"['cs.ai', 'cs.hc', 'cs.ro', 'i.2.6; i.2.7']",, texttosql empowered by large language models a benchmark evaluation,"['Dawei Gao', 'Haibin Wang', 'Yaliang Li', 'Xiuyu Sun', 'Yichen Qian', 'Bolin Ding', 'Jingren Zhou']",http://arxiv.org/pdf/2308.15363v4.pdf,2023-08-29,," Large language models (LLMs) have emerged as a new paradigm for the Text-to-SQL task.
However, the absence of a systematical benchmark inhibits the developmentof designing effective, efficient and economic LLM-based Text-to-SQL solutions.To address this challenge, in this paper, we first conduct a systematical andextensive comparison over existing prompt engineering methods, includingquestion representation, example selection and example organization, and withthese experimental results, we elaborate their pros and cons. Based on thesefindings, we propose a new integrated solution, named DAIL-SQL, which refreshesthe Spider leaderboard with 86.6% execution accuracy and sets a new bar. Toexplore the potential of open-source LLM, we investigate them in variousscenarios, and further enhance their performance with supervised fine-tuning.Our explorations highlight open-source LLMs' potential in Text-to-SQL, as wellas the advantages and disadvantages of the supervised fine-tuning.Additionally, towards an efficient and economic LLM-based Text-to-SQL solution,we emphasize the token efficiency in prompt engineering and compare the priorstudies under this metric. We hope that our work provides a deeperunderstanding of Text-to-SQL with LLMs, and inspires further investigations andbroad applications.",,arXiv,"['cs.db', 'cs.cl', 'cs.lg']",, understanding prompt engineering may not require rethinking generalization,"['Victor Akinwande', 'Yiding Jiang', 'Dylan Sam', 'J. Zico Kolter']",http://arxiv.org/pdf/2310.03957v1.pdf,2023-10-06,," Zero-shot learning in prompted vision-language models, the practice ofcrafting prompts to build classifiers without an explicit training process, hasachieved impressive performance in many settings. This success presents aseemingly surprising observation: these methods suffer relatively little fromoverfitting, i.e., when a prompt is manually engineered to achieve low error ona given training set (thus rendering the method no longer actually zero-shot),the approach still performs well on held-out test data. In this paper, we showthat we can explain such performance well via recourse to classical PAC-Bayesbounds. Specifically, we show that the discrete nature of prompts, combinedwith a PAC-Bayes prior given by a language model, results in generalizationbounds that are remarkably tight by the standards of the literature: forinstance, the generalization bound of an ImageNet classifier is often within afew percentage points of the true test error. We demonstrate empirically thatthis holds for existing handcrafted prompts and prompts generated throughsimple greedy search. Furthermore, the resulting bound is well-suited for modelselection: the models with the best bound typically also have the best testperformance. This work thus provides a possible justification for thewidespread practice of prompt engineering, even if it seems that such methodscould potentially overfit the training data.",,arXiv,"['cs.lg', 'cs.cv']",, configuration validation with large language models,"['Xinyu Lian', 'Yinfang Chen', 'Runxiang Cheng', 'Jie Huang', 'Parth Thakkar', 'Tianyin Xu']",http://arxiv.org/pdf/2310.09690v1.pdf,2023-10-15,," Misconfigurations are the major causes of software failures. Existingconfiguration validation techniques rely on manually written rules or testcases, which are expensive to implement and maintain, and are hard to becomprehensive. 
Leveraging machine learning (ML) and natural language processing (NLP) for configuration validation is considered a promising direction, but has been facing challenges such as the need of not only large-scale configuration data, but also system-specific features and models which are hard to generalize. Recent advances in Large Language Models (LLMs) show the promises to address some of the long-lasting limitations of ML/NLP-based configuration validation techniques. In this paper, we present an exploratory analysis on the feasibility and effectiveness of using LLMs like GPT and Codex for configuration validation. Specifically, we take a first step to empirically evaluate LLMs as configuration validators without additional fine-tuning or code generation. We develop a generic LLM-based validation framework, named Ciri, which integrates different LLMs. Ciri devises effective prompt engineering with few-shot learning based on both valid configuration and misconfiguration data. Ciri also validates and aggregates the outputs of LLMs to generate validation results, coping with known hallucination and nondeterminism of LLMs. We evaluate the validation effectiveness of Ciri on five popular LLMs using configuration data of six mature, widely deployed open-source systems. Our analysis (1) confirms the potential of using LLMs for configuration validation, (2) understands the design space of LLM-based validators like Ciri, especially in terms of prompt engineering with few-shot learning, and (3) reveals open challenges such as ineffectiveness in detecting certain types of misconfigurations and biases to popular configuration parameters.",,arXiv,"['cs.se', 'cs.ai', 'cs.os']",, learning to prompt for visionlanguage models,"['Kaiyang Zhou', 'Jingkang Yang', 'Chen Change Loy', 'Ziwei Liu']",http://arxiv.org/pdf/2109.01134v6.pdf,2021-09-02,," Large pre-trained vision-language models like CLIP have shown great potential in learning representations that are transferable across a wide range of downstream tasks. Different from the traditional representation learning that is based mostly on discretized labels, vision-language pre-training aligns images and texts in a common feature space, which allows zero-shot transfer to a downstream task via prompting, i.e., classification weights are synthesized from natural language describing classes of interest. In this work, we show that a major challenge for deploying such models in practice is prompt engineering, which requires domain expertise and is extremely time-consuming -- one needs to spend a significant amount of time on words tuning since a slight change in wording could have a huge impact on performance. Inspired by recent advances in prompt learning research in natural language processing (NLP), we propose Context Optimization (CoOp), a simple approach specifically for adapting CLIP-like vision-language models for downstream image recognition. Concretely, CoOp models a prompt's context words with learnable vectors while the entire pre-trained parameters are kept fixed. To handle different image recognition tasks, we provide two implementations of CoOp: unified context and class-specific context. Through extensive experiments on 11 datasets, we demonstrate that CoOp requires as few as one or two shots to beat hand-crafted prompts with a decent margin and is able to gain significant improvements over prompt engineering with more shots, e.g., with 16 shots the average gain is around 15% (with the highest reaching over 45%).
Despite being a learning-based approach, CoOp achieves superb domain generalization performance compared with the zero-shot model using hand-crafted prompts.",,arXiv,"['cs.cv', 'cs.ai', 'cs.lg']",, an empirical study on fewshot knowledge probing for pretrained language models,"['Tianxing He', 'Kyunghyun Cho', 'James Glass']",http://arxiv.org/pdf/2109.02772v2.pdf,2021-09-06,," Prompt-based knowledge probing for 1-hop relations has been used to measure how much world knowledge is stored in pretrained language models. Existing work uses considerable amounts of data to tune the prompts for better performance. In this work, we compare a variety of approaches under a few-shot knowledge probing setting, where only a small number (e.g., 10 or 20) of example triples are available. In addition, we create a new dataset named TREx-2p, which contains 2-hop relations. We report that few-shot examples can strongly boost the probing performance for both 1-hop and 2-hop relations. In particular, we find that a simple-yet-effective approach of finetuning the bias vectors in the model outperforms existing prompt-engineering methods. Our dataset and code are available at https://github.com/cloudygoose/fewshot_lama.",,arXiv,['cs.ai'],, solving probability and statistics problems by program synthesis,"['Leonard Tang', 'Elizabeth Ke', 'Nikhil Singh', 'Nakul Verma', 'Iddo Drori']",http://arxiv.org/pdf/2111.08267v1.pdf,2021-11-16,," We solve university level probability and statistics questions by program synthesis using OpenAI's Codex, a Transformer trained on text and fine-tuned on code. We transform course problems from MIT's 18.05 Introduction to Probability and Statistics and Harvard's STAT110 Probability into programming tasks. We then execute the generated code to get a solution. Since these course questions are grounded in probability, we often aim to have Codex generate probabilistic programs that simulate a large number of probabilistic dependencies to compute its solution. Our approach requires prompt engineering to transform the question from its original form to an explicit, tractable form that results in a correct program and solution. To estimate the amount of work needed to translate an original question into its tractable form, we measure the similarity between original and transformed questions. Our work is the first to introduce a new dataset of university-level probability and statistics problems and solve these problems in a scalable fashion using the program synthesis capabilities of large language models.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cl', 'cs.pl']",, polyglot prompt multilingual multitask promptraining,"['Jinlan Fu', 'See-Kiong Ng', 'Pengfei Liu']",http://arxiv.org/pdf/2204.14264v2.pdf,2022-04-29,," This paper aims for a potential architectural improvement for multilingual learning and asks: Can different tasks from different languages be modeled in a monolithic framework, i.e. without any task/language-specific module? The benefit of achieving this could open new doors for future multilingual research, including allowing systems trained on low resources to be further assisted by other languages as well as other tasks. We approach this goal by developing a learning framework named Polyglot Prompting to exploit prompting methods for learning a unified semantic space for different languages and tasks with multilingual prompt engineering.
We performed a comprehensive evaluation of 6 tasks, namely topic classification, sentiment classification, named entity recognition, question answering, natural language inference, and summarization, covering 24 datasets and 49 languages. The experimental results demonstrated the efficacy of multilingual multitask prompt-based learning and led to inspiring observations. We also present an interpretable multilingual evaluation methodology and show how the proposed framework, multilingual multitask prompt training, works. We release all datasets prompted in the best setting and code.",,arXiv,['cs.cl'],, clipclop clipguided collage and photomontage,"['Piotr Mirowski', 'Dylan Banarse', 'Mateusz Malinowski', 'Simon Osindero', 'Chrisantha Fernando']",http://arxiv.org/pdf/2205.03146v3.pdf,2022-05-06,," The unabated mystique of large-scale neural networks, such as the CLIP dual image-and-text encoder, popularized automatically generated art. Increasingly more sophisticated generators enhanced the artworks' realism and visual appearance, and creative prompt engineering enabled stylistic expression. Guided by an artist-in-the-loop ideal, we design a gradient-based generator to produce collages. It requires the human artist to curate libraries of image patches and to describe (with prompts) the whole image composition, with the option to manually adjust the patches' positions during generation, thereby allowing humans to reclaim some control of the process and achieve greater creative freedom. We explore the aesthetic potentials of high-resolution collages, and provide an open-source Google Colab as an artistic tool.",,arXiv,"['cs.cv', 'cs.ai']",, the creativity of texttoimage generation,['Jonas Oppenlaender'],http://arxiv.org/pdf/2206.02904v4.pdf,2022-05-13,," Text-guided synthesis of images has made a giant leap towards becoming a mainstream phenomenon. With text-to-image generation systems, anybody can create digital images and artworks. This provokes the question of whether text-to-image generation is creative. This paper expounds on the nature of human creativity involved in text-to-image art (so-called ""AI art"") with a specific focus on the practice of prompt engineering. The paper argues that the current product-centered view of creativity falls short in the context of text-to-image generation. A case exemplifying this shortcoming is provided and the importance of online communities for the creative ecosystem of text-to-image art is highlighted. The paper provides a high-level summary of this online ecosystem drawing on Rhodes' conceptual four P model of creativity. Challenges for evaluating the creativity of text-to-image generation and opportunities for research on text-to-image generation in the field of Human-Computer Interaction (HCI) are discussed.",,arXiv,"['cs.hc', 'cs.gr', 'h.5; h.m']",, rationaleaugmented ensembles in language models,"['Xuezhi Wang', 'Jason Wei', 'Dale Schuurmans', 'Quoc Le', 'Ed Chi', 'Denny Zhou']",http://arxiv.org/pdf/2207.00747v1.pdf,2022-07-02,," Recent research has shown that rationales, or step-by-step chains of thought, can be used to improve performance in multi-step reasoning tasks. We reconsider rationale-augmented prompting for few-shot in-context learning, where (input -> output) prompts are expanded to (input, rationale -> output) prompts. For rationale-augmented prompting we demonstrate how existing approaches, which rely on manual prompt engineering, are subject to sub-optimal rationales that may harm performance.
To mitigate this brittleness, we propose a unified framework of rationale-augmented ensembles, where we identify rationale sampling in the output space as the key component to robustly improve performance. This framework is general and can easily be extended to common natural language processing tasks, even those that do not traditionally leverage intermediate steps, such as question answering, word sense disambiguation, and sentiment analysis. We demonstrate that rationale-augmented ensembles achieve more accurate and interpretable results than existing prompting approaches -- including standard prompting without rationales and rationale-based chain-of-thought prompting -- while simultaneously improving interpretability of model predictions through the associated rationales.",,arXiv,['cs.cl'],, will it blend mixing training paradigms & prompting for argument quality prediction,"['Michiel van der Meer', 'Myrthe Reuver', 'Urja Khurana', 'Lea Krause', 'Selene Báez Santamaría']",http://arxiv.org/pdf/2209.08966v2.pdf,2022-09-19,," This paper describes our contributions to the Shared Task of the 9th Workshop on Argument Mining (2022). Our approach uses Large Language Models for the task of Argument Quality Prediction. We perform prompt engineering using GPT-3, and also investigate the training paradigms multi-task learning, contrastive learning, and intermediate-task training. We find that a mixed prediction setup outperforms single models. Prompting GPT-3 works best for predicting argument validity, and argument novelty is best estimated by a model trained using all three training paradigms.",,arXiv,"['cs.cl', 'cs.ai']",, controllable image captioning via prompting,"['Ning Wang', 'Jiahao Xie', 'Jihao Wu', 'Mingbo Jia', 'Linlin Li']",http://arxiv.org/pdf/2212.01803v1.pdf,2022-12-04,," Despite the remarkable progress of image captioning, existing captioners typically lack the controllable capability to generate desired image captions, e.g., describing the image in a rough or detailed manner, in a factual or emotional view, etc. In this paper, we show that a unified model is qualified to perform well in diverse domains and freely switch among multiple styles. Such a controllable capability is achieved by embedding the prompt learning into the image captioning framework. To be specific, we design a set of prompts to fine-tune the pre-trained image captioner. These prompts allow the model to absorb stylized data from different domains for joint training, without performance degradation in each domain. Furthermore, we optimize the prompts with learnable vectors in the continuous word embedding space, avoiding the heuristic prompt engineering and meanwhile exhibiting superior performance. In the inference stage, our model is able to generate desired stylized captions by choosing the corresponding prompts. Extensive experiments verify the controllable capability of the proposed method. Notably, we achieve outstanding performance on two diverse image captioning benchmarks including COCO Karpathy split and TextCaps using a unified model.",,arXiv,['cs.cv'],, explanation regeneration via information bottleneck,"['Qintong Li', 'Zhiyong Wu', 'Lingpeng Kong', 'Wei Bi']",http://arxiv.org/pdf/2212.09603v2.pdf,2022-12-19,," Explaining the black-box predictions of NLP models naturally and accurately is an important open problem in natural language generation. These free-text explanations are expected to contain sufficient and carefully-selected evidence to form supportive arguments for predictions.
Due to the superior generative capacity of large pretrained language models, recent work built on prompt engineering enables explanation generation without specific training. However, explanation generated through single-pass prompting often lacks sufficiency and conciseness. To address this problem, we develop an information bottleneck method EIB to produce refined explanations that are sufficient and concise. Our approach regenerates the free-text explanation by polishing the single-pass output from the pretrained language model but retaining the information that supports the contents being explained. Experiments on two out-of-domain tasks verify the effectiveness of EIB through automatic evaluation and thoroughly-conducted human evaluation.",,arXiv,['cs.cl'],, uprise universal prompt retrieval for improving zeroshot evaluation,"['Daixuan Cheng', 'Shaohan Huang', 'Junyu Bi', 'Yuefeng Zhan', 'Jianfeng Liu', 'Yujing Wang', 'Hao Sun', 'Furu Wei', 'Denvy Deng', 'Qi Zhang']",http://arxiv.org/pdf/2303.08518v4.pdf,2023-03-15,," Large Language Models (LLMs) are popular for their impressive abilities, but the need for model-specific fine-tuning or task-specific prompt engineering can hinder their generalization. We propose UPRISE (Universal Prompt Retrieval for Improving zero-Shot Evaluation), which tunes a lightweight and versatile retriever that automatically retrieves prompts for a given zero-shot task input. Specifically, we demonstrate universality in a cross-task and cross-model scenario: the retriever is tuned on a diverse set of tasks, but tested on unseen task types; we use a small frozen LLM, GPT-Neo-2.7B, for tuning the retriever, but test the retriever on different LLMs of much larger scales, such as BLOOM-7.1B, OPT-66B and GPT3-175B. Additionally, we show that UPRISE mitigates the hallucination problem in our experiments with ChatGPT, suggesting its potential to improve even the strongest LLMs. Our model and code are available at https://github.com/microsoft/LMOps.",,arXiv,['cs.cl'],, patchtoken aligned bayesian prompt learning for visionlanguage models,"['Xinyang Liu', 'Dongsheng Wang', 'Miaoge Li', 'Zhibin Duan', 'Yishi Xu', 'Bo Chen', 'Mingyuan Zhou']",http://arxiv.org/pdf/2303.09100v1.pdf,2023-03-16,," For downstream applications of vision-language pre-trained models, there has been significant interest in constructing effective prompts. Existing works on prompt engineering, which either require laborious manual designs or optimize the prompt tuning as a point estimation problem, may fail to describe diverse characteristics of categories and limit their applications. We introduce a Bayesian probabilistic resolution to prompt learning, where the label-specific stochastic prompts are generated hierarchically by first sampling a latent vector from an underlying distribution and then employing a lightweight generative model. Importantly, we semantically regularize prompt learning with the visual knowledge and view images and the corresponding prompts as patch and token sets under optimal transport, which pushes the prompt tokens to faithfully capture the label-specific visual concepts, instead of overfitting the training categories.
Moreover, the proposed model can also be straightforwardly extended to the conditional case where the instance-conditional prompts are generated to improve the generalizability. Extensive experiments on 15 datasets show promising transferability and generalization performance of our proposed model.",,arXiv,"['cs.cv', 'cs.cl', 'cs.lg']",, safety analysis in the era of large language models a case study of stpa using chatgpt,"['Yi Qi', 'Xingyu Zhao', 'Siddartha Khastgir', 'Xiaowei Huang']",http://arxiv.org/pdf/2304.01246v3.pdf,2023-04-03,," Can safety analysis make use of Large Language Models (LLMs)? A case study explores Systems Theoretic Process Analysis (STPA) applied to Automatic Emergency Brake (AEB) and Electricity Demand Side Management (DSM) systems using ChatGPT. We investigate how collaboration schemes, input semantic complexity, and prompt guidelines influence STPA results. Comparative results show that using ChatGPT without human intervention may be inadequate due to reliability related issues, but with careful design, it may outperform human experts. No statistically significant differences are found when varying the input semantic complexity or using common prompt guidelines, which suggests the necessity for developing domain-specific prompt engineering. We also highlight future challenges, including concerns about LLM trustworthiness and the necessity for standardisation and regulation in this domain.",,arXiv,"['cs.cl', 'cs.ai', 'cs.cy', 'cs.se']",, constructing dreams using generative ai,"['Safinah Ali', 'Daniella DiPaola', 'Randi Williams', 'Prerna Ravi', 'Cynthia Breazeal']",http://arxiv.org/pdf/2305.12013v1.pdf,2023-05-19,," Generative AI tools introduce new and accessible forms of media creation for youth. They also raise ethical concerns about the generation of fake media, data protection, privacy and ownership of AI-generated art. Since generative AI is already being used in products used by youth, it is critical that they understand how these tools work and how they can be used or misused. In this work, we facilitated students' generative AI learning through expression of their imagined future identities. We designed a learning workshop - Dreaming with AI - where students learned about the inner workings of generative AI tools, used text-to-image generation algorithms to create their imagined future dreams, reflected on the potential benefits and harms of generative AI tools and voiced their opinions about policies for the use of these tools in classrooms. In this paper, we present the learning activities and experiences of 34 high school students who engaged in our workshops.
Students reached creative learning objectives by using prompt engineering to create their future dreams, gained technical knowledge by learning the abilities, limitations, text-visual mappings and applications of generative AI, and identified most potential societal benefits and harms of generative AI.",,arXiv,"['cs.hc', 'cs.ai', 'cs.cy']",, cona a novel contextaware instruction paradigm for communication using large language model,"['Nan Zhou', 'Xinghui Tao', 'Xi Chen']",http://arxiv.org/pdf/2305.18620v1.pdf,2023-05-26,," We introduce CONA, a novel context-aware instruction paradigm for effective knowledge dissemination using generative pre-trained transformer (GPT) models. CONA is a flexible framework designed to leverage the capabilities of Large Language Models (LLMs) and incorporate the DIKW (Data, Information, Knowledge, Wisdom) hierarchy to automatically instruct and optimise presentation content, anticipate potential audience inquiries, and provide context-aware answers that adapt to the knowledge level of the audience group. The unique aspect of the CONA paradigm lies in its combination of an independent advisory mechanism and a recursive feedback loop rooted on the DIKW hierarchy. This synergy significantly enhances context-aware contents, ensuring they are accessible and easily comprehended by the audience. This paradigm is an early pioneer to explore new methods for knowledge dissemination and communication in the LLM era, offering effective support for everyday knowledge sharing scenarios. We conduct experiments on a range of audience roles, along with materials from various disciplines using GPT4. Both quantitative and qualitative results demonstrated that the proposed CONA paradigm achieved remarkable performance compared to the outputs guided by conventional prompt engineering.",,arXiv,"['cs.cl', 'cs.ai', 'cs.hc']",, gpt4tools teaching large language model to use tools via selfinstruction,"['Rui Yang', 'Lin Song', 'Yanwei Li', 'Sijie Zhao', 'Yixiao Ge', 'Xiu Li', 'Ying Shan']",http://arxiv.org/pdf/2305.18752v1.pdf,2023-05-30,," This paper aims to efficiently enable Large Language Models (LLMs) to use multimodal tools. Advanced proprietary LLMs, such as ChatGPT and GPT-4, have shown great potential for tool usage through sophisticated prompt engineering. Nevertheless, these models typically rely on prohibitive computational costs and publicly inaccessible data. To address these challenges, we propose the GPT4Tools based on self-instruct to enable open-source LLMs, such as LLaMA and OPT, to use tools. It generates an instruction-following dataset by prompting an advanced teacher with various multi-modal contexts. By using the Low-Rank Adaptation (LoRA) optimization, our approach facilitates the open-source LLMs to solve a range of visual problems, including visual comprehension and image generation. Moreover, we provide a benchmark to evaluate the ability of LLMs to use tools, which is performed in both zero-shot and fine-tuning ways. Extensive experiments demonstrate the effectiveness of our method on various language models, which not only significantly improves the accuracy of invoking seen tools, but also enables the zero-shot capacity for unseen tools.
The code and demo are available at https://github.com/StevenGrove/GPT4Tools.",,arXiv,"['cs.cv', 'cs.cl']",, an approach to solving the abstraction and reasoning corpus (arc) challenge,['Tan John Chong Min'],http://arxiv.org/pdf/2306.03553v1.pdf,2023-06-06,," We utilise the power of Large Language Models (LLMs), in particular GPT4, to be prompt engineered into performing an arbitrary task. Here, we give the model some human priors via text, along with some typical procedures for solving the ARC tasks, and ask it to generate the i) broad description of the input-output relation, ii) detailed steps of the input-output mapping, iii) use the detailed steps to perform manipulation on the test input and derive the test output. The current GPT3.5/GPT4 prompt solves 2 out of 4 tested small ARC challenges (those with small grids of 8x8 and below). With tweaks to the prompt to make it more specific for the use case, it can solve more. We posit that when scaled to a multi-agent system with usage of past memory and equipped with an image interpretation tool via Visual Question Answering, we may actually be able to solve the majority of the ARC challenge",,arXiv,['cs.ai'],, falle a foley sound synthesis model and strategies,"['Minsung Kang', 'Sangshin Oh', 'Hyeongi Moon', 'Kyungyun Lee', 'Ben Sangbae Chon']",http://arxiv.org/pdf/2306.09807v2.pdf,2023-06-16,," This paper introduces FALL-E, a foley synthesis system and its training/inference strategies. The FALL-E model employs a cascaded approach comprising low-resolution spectrogram generation, spectrogram super-resolution, and a vocoder. We trained every sound-related model from scratch using our extensive datasets, and utilized a pre-trained language model. We conditioned the model with dataset-specific texts, enabling it to learn sound quality and recording environment based on text input. Moreover, we leveraged external language models to improve text descriptions of our datasets and performed prompt engineering for quality, coherence, and diversity. FALL-E was evaluated by an objective measure as well as listening tests in the DCASE 2023 challenge Task 7. The submission achieved the second place on average, while achieving the best score for diversity, second place for audio quality, and third place for class fitness.",,arXiv,"['eess.as', 'cs.lg', 'cs.sd']",, the cultivated practices of texttoimage generation,['Jonas Oppenlaender'],http://arxiv.org/pdf/2306.11393v1.pdf,2023-06-20,," Humankind is entering a novel creative era in which anybody can synthesize digital information using generative artificial intelligence (AI). Text-to-image generation, in particular, has become vastly popular and millions of practitioners produce AI-generated images and AI art online. This chapter first gives an overview of the key developments that enabled a healthy co-creative online ecosystem around text-to-image generation to rapidly emerge, followed by a high-level description of key elements in this ecosystem. A particular focus is placed on prompt engineering, a creative practice that has been embraced by the AI art community. It is then argued that the emerging co-creative ecosystem constitutes an intelligent system on its own - a system that both supports human creativity, but also potentially entraps future generations and limits future development efforts in AI.
The chapter discusses the potential risks and dangers of cultivating this co-creative ecosystem, such as the bias inherent in today's training data, potential quality degradation in future image generation systems due to synthetic data becoming commonplace, and the potential long-term effects of text-to-image generation on people's imagination, ambitions, and development.",,arXiv,"['cs.cy', 'cs.ai', 'k.4; j.5; i.2.0; k.5.m']",, chitchat or deep talk prompt engineering for process mining,"['Urszula Jessen', 'Michal Sroka', 'Dirk Fahland']",http://arxiv.org/pdf/2307.09909v1.pdf,2023-07-19,," This research investigates the application of Large Language Models (LLMs) to augment conversational agents in process mining, aiming to tackle its inherent complexity and diverse skill requirements. While LLM advancements present novel opportunities for conversational process mining, generating efficient outputs is still a hurdle. We propose an innovative approach that amends many issues in existing solutions, informed by prior research on Natural Language Processing (NLP) for conversational agents. Leveraging LLMs, our framework improves both accessibility and agent performance, as demonstrated by experiments on public question and data sets. Our research sets the stage for future explorations into LLMs' role in process mining and concludes with propositions for enhancing LLM memory, implementing real-time user testing, and examining diverse data sets.",,arXiv,['cs.ai'],, sentimentgpt exploiting gpt for advanced sentiment analysis and its departure from current machine learning,"['Kiana Kheiri', 'Hamid Karimi']",http://arxiv.org/pdf/2307.10234v2.pdf,2023-07-16,," This study presents a thorough examination of various Generative Pretrained Transformer (GPT) methodologies in sentiment analysis, specifically in the context of Task 4 on the SemEval 2017 dataset. Three primary strategies are employed: 1) prompt engineering using the advanced GPT-3.5 Turbo, 2) fine-tuning GPT models, and 3) an inventive approach to embedding classification. The research yields detailed comparative insights among these strategies and individual GPT models, revealing their unique strengths and potential limitations. Additionally, the study compares these GPT-based methodologies with other current, high-performing models previously used with the same dataset. The results illustrate the significant superiority of the GPT approaches in terms of predictive performance, more than 22% in F1-score compared to the state-of-the-art. Further, the paper sheds light on common challenges in sentiment analysis tasks, such as understanding context and detecting sarcasm. It underscores the enhanced capabilities of the GPT models to effectively handle these complexities. Taken together, these findings highlight the promising potential of GPT models in sentiment analysis, setting the stage for future research in this field. The code can be found at https://github.com/DSAatUSU/SentimentGPT",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg', 'cs.si']",, domain knowledge distillation from large language model an empirical study in the autonomous driving domain,"['Yun Tang', 'Antonio A. Bruto da Costa', 'Jason Zhang', 'Irvine Patrick', 'Siddartha Khastgir', 'Paul Jennings']",http://arxiv.org/pdf/2307.11769v1.pdf,2023-07-17,," Engineering knowledge-based (or expert) systems require extensive manual effort and domain knowledge. As Large Language Models (LLMs) are trained using an enormous amount of cross-domain knowledge, it becomes possible to automate such engineering processes.
This paper presents an empirical automation and semi-automation framework for domain knowledge distillation using prompt engineering and the LLM ChatGPT. We assess the framework empirically in the autonomous driving domain and present our key observations. In our implementation, we construct the domain knowledge ontology by ""chatting"" with ChatGPT. The key finding is that while fully automated domain ontology construction is possible, human supervision and early intervention typically improve efficiency and output quality as they lessen the effects of response randomness and the butterfly effect. We, therefore, also develop a web-based distillation assistant enabling supervision and flexible intervention at runtime. We hope our findings and tools could inspire future research toward revolutionizing the engineering of knowledge-based systems across application domains.",,arXiv,['cs.cl'],, do llms possess a personality making the mbti test an amazing evaluation for large language models,"['Keyu Pan', 'Yawen Zeng']",http://arxiv.org/pdf/2307.16180v1.pdf,2023-07-30,," The field of large language models (LLMs) has made significant progress, and their knowledge storage capacity is approaching that of human beings. Furthermore, advanced techniques, such as prompt learning and reinforcement learning, are being employed to address ethical concerns and hallucination problems associated with LLMs, bringing them closer to aligning with human values. This situation naturally raises the question of whether LLMs with human-like abilities possess a human-like personality. In this paper, we aim to investigate the feasibility of using the Myers-Briggs Type Indicator (MBTI), a widespread human personality assessment tool, as an evaluation metric for LLMs. Specifically, extensive experiments will be conducted to explore: 1) the personality types of different LLMs, 2) the possibility of changing the personality types by prompt engineering, and 3) how the training dataset affects the model's personality. Although the MBTI is not a rigorous assessment, it can still reflect the similarity between LLMs and human personality. In practice, the MBTI has the potential to serve as a rough indicator. Our codes are available at https://github.com/HarderThenHarder/transformers_tasks/tree/main/LLM/llms_mbti.",,arXiv,['cs.cl'],, alphagpt humanai interactive alpha mining for quantitative investment,"['Saizhuo Wang', 'Hang Yuan', 'Leon Zhou', 'Lionel M. Ni', 'Heung-Yeung Shum', 'Jian Guo']",http://arxiv.org/pdf/2308.00016v1.pdf,2023-07-31,," One of the most important tasks in quantitative investment research is mining new alphas (effective trading signals or factors). Traditional alpha mining methods, either hand-crafted factor synthesizing or algorithmic factor mining (e.g., search with genetic programming), have inherent limitations, especially in implementing the ideas of quants. In this work, we propose a new alpha mining paradigm by introducing human-AI interaction, and a novel prompt engineering algorithmic framework to implement this paradigm by leveraging the power of large language models. Moreover, we develop Alpha-GPT, a new interactive alpha mining system framework that provides a heuristic way to ""understand"" the ideas of quant researchers and outputs creative, insightful, and effective alphas.
We demonstrate the effectiveness and advantage of Alpha-GPT via a number of alpha mining experiments.",,arXiv,"['q-fin.cp', 'cs.ai', 'cs.cl']",, optimizing machine translation through prompt engineering an investigation into chatgpt's customizability,['Masaru Yamada'],http://arxiv.org/pdf/2308.01391v1.pdf,2023-08-02,," This paper explores the influence of integrating the purpose of the translation and the target audience into prompts on the quality of translations produced by ChatGPT. Drawing on previous translation studies, industry practices, and ISO standards, the research underscores the significance of the pre-production phase in the translation process. The study reveals that the inclusion of suitable prompts in large-scale language models like ChatGPT can yield flexible translations, a feat yet to be realized by conventional Machine Translation (MT). The research scrutinizes the changes in translation quality when prompts are used to generate translations that meet specific conditions. The evaluation is conducted from a practicing translator's viewpoint, both subjectively and qualitatively, supplemented by the use of OpenAI's word embedding API for cosine similarity calculations. The findings suggest that the integration of the purpose and target audience into prompts can indeed modify the generated translations, generally enhancing the translation quality by industry standards. The study also demonstrates the practical application of the ""good translation"" concept, particularly in the context of marketing documents and culturally dependent idioms.",,arXiv,['cs.cl'],, interact exploring the potentials of chatgpt as a cooperative agent,"['Po-Lin Chen', 'Cheng-Shang Chang']",http://arxiv.org/pdf/2308.01552v1.pdf,2023-08-03,," This research paper delves into the integration of OpenAI's ChatGPT into embodied agent systems, evaluating its influence on an interactive decision-making benchmark. Drawing a parallel to the concept of people assuming roles according to their unique strengths, we introduce InterAct. In this approach, we feed ChatGPT with varied prompts, assigning it numerous roles like a checker and a sorter, then integrating them with the original language model. Our research shows a remarkable success rate of 98% in AlfWorld, which consists of 6 different tasks in a simulated household environment, emphasizing the significance of proficient prompt engineering. The results highlight ChatGPT's competence in comprehending and performing intricate tasks effectively in real-world settings, thus paving the way for further advancements in task planning.",,arXiv,"['cs.ai', 'cs.cl', 'cs.lg']",, data race detection using large language models,"['Le Chen', 'Xianzhong Ding', 'Murali Emani', 'Tristan Vanderbruggen', 'Pei-hung Lin', 'Chuanhua Liao']",http://arxiv.org/pdf/2308.07505v2.pdf,2023-08-15,," Large language models (LLMs) are demonstrating significant promise as an alternate strategy to facilitate analyses and optimizations of high-performance computing programs, circumventing the need for resource-intensive manual tool creation. In this paper, we explore a novel LLM-based data race detection approach combining prompt engineering and fine-tuning techniques. We create a dedicated dataset named DRB-ML, which is derived from DataRaceBench, with fine-grain labels showing the presence of data race pairs and their associated variables, line numbers, and read/write information. DRB-ML is then used to evaluate representative LLMs and fine-tune open-source ones.
Our experiment shows that LLMs can be a viable approach to data race detection. However, they still cannot compete with traditional data race detection tools when we need detailed information about variable pairs causing data races.",,arXiv,"['cs.lg', 'cs.cl']",, datatotext generation for severely underresourced languages with gpt35 a bit of help needed from google translate,"['Michela Lorandi', 'Anya Belz']",http://arxiv.org/pdf/2308.09957v1.pdf,2023-08-19,," LLMs like GPT are great at tasks involving English which dominates in their training data. In this paper, we look at how they cope with tasks involving languages that are severely under-represented in their training data, in the context of data-to-text generation for Irish, Maltese, Welsh and Breton. During the prompt-engineering phase we tested a range of prompt types and formats on GPT-3.5 and 4 with a small sample of example input/output pairs. We then fully evaluated the two most promising prompts in two scenarios: (i) direct generation into the under-resourced language, and (ii) generation into English followed by translation into the under-resourced language. We find that few-shot prompting works better for direct generation into under-resourced languages, but that the difference disappears when pivoting via English. The few-shot + translation system variants were submitted to the WebNLG 2023 shared task where they outperformed competitor systems by substantial margins in all languages on all metrics. We conclude that good performance on under-resourced languages can be achieved out of the box with state-of-the-art LLMs. However, our best results (for Welsh) remain well below the lowest ranked English system at WebNLG'20.",,arXiv,"['cs.cl', 'cs.ai']",, "furchat an embodied conversational agent using llms, combining open and closeddomain dialogue with facial expressions","['Neeraj Cherakara', 'Finny Varghese', 'Sheena Shabana', 'Nivan Nelson', 'Abhiram Karukayil', 'Rohith Kulothungan', 'Mohammed Afil Farhan', 'Birthe Nesset', 'Meriam Moujahid', 'Tanvi Dinkar', 'Verena Rieser', 'Oliver Lemon']",http://arxiv.org/pdf/2308.15214v2.pdf,2023-08-29,," We demonstrate an embodied conversational agent that can function as a receptionist and generate a mixture of open and closed-domain dialogue along with facial expressions, by using a large language model (LLM) to develop an engaging conversation. We deployed the system onto a Furhat robot, which is highly expressive and capable of using both verbal and nonverbal cues during interaction. The system was designed specifically for the National Robotarium to interact with visitors through natural conversations, providing them with information about the facilities, research, news, upcoming events, etc. The system utilises the state-of-the-art GPT-3.5 model to generate such information along with domain-general conversations and facial expressions based on prompt engineering.",,arXiv,"['cs.cl', 'cs.ai', 'cs.hc', 'cs.ro']",, linking microblogging sentiments to stock price movement an application of gpt4,"['Rick Steinert', 'Saskia Altmann']",http://arxiv.org/pdf/2308.16771v1.pdf,2023-08-31,," This paper investigates the potential improvement of the GPT-4 Language Learning Model (LLM) in comparison to BERT for modeling same-day daily stock price movements of Apple and Tesla in 2017, based on sentiment analysis of microblogging messages. We recorded daily adjusted closing prices and translated them into up-down movements. Sentiment for each day was extracted from messages on the Stocktwits platform using both LLMs.
We develop a novel method to engineer a comprehensive prompt for contextual sentiment analysis which unlocks the true capabilities of modern LLMs. This enables us to carefully retrieve sentiments, perceived advantages or disadvantages, and the relevance towards the analyzed company. Logistic regression is used to evaluate whether the extracted message contents reflect stock price movements. As a result, GPT-4 exhibited substantial accuracy, outperforming BERT in five out of six months and substantially exceeding a naive buy-and-hold strategy, reaching a peak accuracy of 71.47% in May. The study also highlights the importance of prompt engineering in obtaining desired outputs from GPT-4's contextual abilities. However, the costs of deploying GPT-4 and the need for fine-tuning prompts highlight some practical considerations for its use.",,arXiv,"['q-fin.st', 'q-fin.cp']",, fiat fusing learning paradigms with instructionaccelerated tuning,"['Xinyi Wang', 'John Wieting', 'Jonathan H. Clark']",http://arxiv.org/pdf/2309.04663v2.pdf,2023-09-09,," Learning paradigms for large language models (LLMs) currently tend to fall within either in-context learning (ICL) or full fine-tuning. Each of these comes with their own trade-offs based on available data, model size, compute cost, ease-of-use, and final quality with neither solution performing well across-the-board. In this article, we first describe ICL and fine-tuning paradigms in a way that highlights their natural connections. Based on these connections, we propose a new learning paradigm called FIAT that fuses the best of these paradigms together, enabling prompt-engineered instructions and chain-of-thought reasoning with the very largest models while also using similar methods to perform parameter updates on a modestly-sized LLM with parameter-efficient tuning. We evaluate FIAT's effectiveness on a variety of multilingual tasks and observe that FIAT performs better than both ICL and fine-tuning at scales ranging from 100-10,000 training examples. We hope that FIAT provides a practical way of harnessing the full potential of LLMs without needing to make a hard choice between learning paradigms.",,arXiv,"['cs.cl', 'cs.ai']",, detecting natural language biases with promptbased learning,"['Md Abdul Aowal', 'Maliha T Islam', 'Priyanka Mary Mammen', 'Sandesh Shetty']",http://arxiv.org/pdf/2309.05227v1.pdf,2023-09-11,," In this project, we want to explore the newly emerging field of prompt engineering and apply it to the downstream task of detecting LM biases. More concretely, we explore how to design prompts that can indicate 4 different types of biases: (1) gender, (2) race, (3) sexual orientation, and (4) religion-based. Within our project, we experiment with different manually crafted prompts that can draw out the subtle biases that may be present in the language model. We apply these prompts to multiple variations of popular and well-recognized models: BERT, RoBERTa, and T5 to evaluate their biases.
We provide a comparative analysis of these models and assess them using a two-fold method: use human judgment to decide whether model predictions are biased and utilize model-level judgment (through further prompts) to understand if a model can self-diagnose the biases of its own prediction.",,arXiv,"['cs.cl', 'cs.ai']",, two timin' repairing smart contracts with a twolayered approach,"['Abhinav Jain', 'Ehan Masud', 'Michelle Han', 'Rohan Dhillon', 'Sumukh Rao', 'Arya Joshi', 'Salar Cheema', 'Saurav Kumar']",http://arxiv.org/pdf/2309.07841v1.pdf,2023-09-14,," Due to the modern relevance of blockchain technology, smart contracts present both substantial risks and benefits. Vulnerabilities within them can trigger a cascade of consequences, resulting in significant losses. Many current papers primarily focus on classifying smart contracts for malicious intent, often relying on limited contract characteristics, such as bytecode or opcode. This paper proposes a novel, two-layered framework: 1) classifying and 2) directly repairing malicious contracts. Slither's vulnerability report is combined with source code and passed through a pre-trained RandomForestClassifier (RFC) and Large Language Models (LLMs), classifying and repairing each suggested vulnerability. Experiments demonstrate the effectiveness of fine-tuned and prompt-engineered LLMs. The smart contract repair models, built from pre-trained GPT-3.5-Turbo and fine-tuned Llama-2-7B models, reduced the overall vulnerability count by 97.5% and 96.7% respectively. A manual inspection of repaired contracts shows that all retain functionality, indicating that the proposed method is appropriate for automatic batch classification and repair of vulnerabilities in smart contracts.",,arXiv,"['cs.cr', 'cs.ai']",, large language models for failure mode classification an investigation,"['Michael Stewart', 'Melinda Hodkiewicz', 'Sirui Li']",http://arxiv.org/pdf/2309.08181v1.pdf,2023-09-15,," In this paper we present the first investigation into the effectiveness of Large Language Models (LLMs) for Failure Mode Classification (FMC). FMC, the task of automatically labelling an observation with a corresponding failure mode code, is a critical task in the maintenance domain as it reduces the need for reliability engineers to spend their time manually analysing work orders. We detail our approach to prompt engineering to enable an LLM to predict the failure mode of a given observation using a restricted code list. We demonstrate that the performance of a GPT-3.5 model (F1=0.80) fine-tuned on annotated data is a significant improvement over a currently available text classification model (F1=0.60) trained on the same annotated data set. The fine-tuned model also outperforms the out-of-the-box GPT-3.5 (F1=0.46). This investigation reinforces the need for high quality fine-tuning data sets for domain-specific tasks using LLMs.",,arXiv,['cs.cl'],, dynacon dynamic robot planner with contextual awareness via llms,"['Gyeongmin Kim', 'Taehyeon Kim', 'Shyam Sundar Kannan', 'Vishnunandan L. N. Venkatesh', 'Donghan Kim', 'Byung-Cheol Min']",http://arxiv.org/pdf/2309.16031v1.pdf,2023-09-27,," Mobile robots often rely on pre-existing maps for effective path planning and navigation. However, when these maps are unavailable, particularly in unfamiliar environments, a different approach becomes essential. This paper introduces DynaCon, a novel system designed to provide mobile robots with contextual awareness and dynamic adaptability during navigation, eliminating the reliance on traditional maps.
DynaCon integrates real-time feedback with an object server, prompt engineering, and navigation modules. By harnessing the capabilities of Large Language Models (LLMs), DynaCon not only understands patterns within given numeric series but also excels at categorizing objects into matched spaces. This facilitates a dynamic path planner imbued with contextual awareness. We validated the effectiveness of DynaCon through an experiment where a robot successfully navigated to its goal using reasoning. Source code and experiment videos for this work can be found at: https://sites.google.com/view/dynacon.",,arXiv,['cs.ro'],, cyber sentinel exploring conversational agents in streamlining security tasks with gpt4,"['Mehrdad Kaheh', 'Danial Khosh Kholgh', 'Panos Kostakos']",http://arxiv.org/pdf/2309.16422v1.pdf,2023-09-28,," In an era where cyberspace is both a battleground and a backbone of modern society, the urgency of safeguarding digital assets against ever-evolving threats is paramount. This paper introduces Cyber Sentinel, an innovative task-oriented cybersecurity dialogue system that is effectively capable of managing two core functions: explaining potential cyber threats within an organization to the user, and taking proactive/reactive security actions when instructed by the user. Cyber Sentinel embodies the fusion of artificial intelligence, cybersecurity domain expertise, and real-time data analysis to combat the multifaceted challenges posed by cyber adversaries. This article delves into the process of creating such a system and how it can interact with other components typically found in cybersecurity organizations. Our work is a novel approach to task-oriented dialogue systems, leveraging the power of chaining GPT-4 models combined with prompt engineering across all sub-tasks. We also highlight its pivotal role in enhancing cybersecurity communication and interaction, concluding that not only does this framework enhance the system's transparency (Explainable AI) but also streamlines the decision-making process and responding to threats (Actionable AI), therefore marking a significant advancement in the realm of cybersecurity communication.",,arXiv,['cs.cr'],, large language models for propaganda detection,"['Kilian Sprenkamp', 'Daniel Gordon Jones', 'Liudmila Zavolokina']",http://arxiv.org/pdf/2310.06422v2.pdf,2023-10-10,," The prevalence of propaganda in our digital society poses a challenge to societal harmony and the dissemination of truth. Detecting propaganda through NLP in text is challenging due to subtle manipulation techniques and contextual dependencies. To address this issue, we investigate the effectiveness of modern Large Language Models (LLMs) such as GPT-3 and GPT-4 for propaganda detection. We conduct experiments using the SemEval-2020 task 11 dataset, which features news articles labeled with 14 propaganda techniques as a multi-label classification problem. Five variations of GPT-3 and GPT-4 are employed, incorporating various prompt engineering and fine-tuning strategies across the different models. We evaluate the models' performance by assessing metrics such as F1 score, Precision, and Recall, comparing the results with the current state-of-the-art approach using RoBERTa. Our findings demonstrate that GPT-4 achieves comparable results to the current state-of-the-art.
Further, this study analyzes the potential and challenges of LLMs in complex tasks like propaganda detection.",,arXiv,"['cs.cl', 'cs.ai']",, gptutor an opensource ai pair programming tool alternative to copilot,"['Eason Chen', 'Ray Huang', 'Justa Liang', 'Damien Chen', 'Pierce Hung']",http://arxiv.org/pdf/2310.13896v3.pdf,2023-10-21,," This paper presents the latest progress of GPTutor: a ChatGPT-powered programming tool extension in Visual Studio Code. The emergence of Large Language Models (LLMs) has improved software development efficiency, but their performance can be hindered by training data limitations and prompt design issues. Existing LLM development tools often operate as black boxes, with users unable to view the prompts used and unable to improve performance by correcting prompts when errors occur. To address the aforementioned issues, GPTutor was introduced as an open-source AI pair programming tool, offering an alternative to Copilot. GPTutor empowers users to customize prompts for various programming languages and scenarios, with support for 120+ human languages and 50+ programming languages. Users can fine-tune prompts to correct the errors from LLM for precision and efficient code generation. At the end of the paper, we underscore GPTutor's potential through examples, including demonstrating its proficiency in interpreting and generating Sui-Move, a newly introduced smart contract language, using prompt engineering.",,arXiv,['cs.hc'],, large language models for aspectbased sentiment analysis,"['Paul F. Simmering', 'Paavo Huoviala']",http://arxiv.org/pdf/2310.18025v1.pdf,2023-10-27,," Large language models (LLMs) offer unprecedented text completion capabilities. As general models, they can fulfill a wide range of roles, including those of more specialized models. We assess the performance of GPT-4 and GPT-3.5 in zero shot, few shot and fine-tuned settings on the aspect-based sentiment analysis (ABSA) task. Fine-tuned GPT-3.5 achieves a state-of-the-art F1 score of 83.8 on the joint aspect term extraction and polarity classification task of the SemEval-2014 Task 4, improving upon InstructABSA [@scaria_instructabsa_2023] by 5.7%. However, this comes at the price of 1000 times more model parameters and thus increased inference cost. We discuss the cost-performance trade-offs of different models, and analyze the typical errors that they make. Our results also indicate that detailed prompts improve performance in zero-shot and few-shot settings but are not necessary for fine-tuned models. This evidence is relevant for practitioners that are faced with the choice of prompt engineering versus fine-tuning when using LLMs for ABSA.",,arXiv,"['cs.cl', 'cs.ai']",, noisy exemplars make large language models more robust a domainagnostic behavioral analysis,"['Hongyi Zheng', 'Abulhair Saparov']",http://arxiv.org/pdf/2311.00258v1.pdf,2023-11-01,," Recent advances in prompt engineering enable large language models (LLMs) to solve multi-hop logical reasoning problems with impressive accuracy. However, there is little existing work investigating the robustness of LLMs with few-shot prompting techniques. Therefore, we introduce a systematic approach to test the robustness of LLMs in multi-hop reasoning tasks via domain-agnostic perturbations. We include perturbations at multiple levels of abstractions (e.g. lexical perturbations such as typos, and semantic perturbations such as the inclusion of intermediate reasoning steps in the questions) to conduct behavioral analysis on the LLMs.
Throughout our experiments, we find that models are more sensitive to certain perturbations such as replacing words with their synonyms. We also demonstrate that increasing the proportion of perturbed exemplars in the prompts improves the robustness of few-shot prompting methods.",,arXiv,"['cs.cl', 'cs.lg']",, instruction distillation makes large language models efficient zeroshot rankers,"['Weiwei Sun', 'Zheng Chen', 'Xinyu Ma', 'Lingyong Yan', 'Shuaiqiang Wang', 'Pengjie Ren', 'Zhumin Chen', 'Dawei Yin', 'Zhaochun Ren']",http://arxiv.org/pdf/2311.01555v1.pdf,2023-11-02,," Recent studies have demonstrated the great potential of Large Language Models (LLMs) serving as zero-shot relevance rankers. The typical approach involves making comparisons between pairs or lists of documents. Although effective, these listwise and pairwise methods are not efficient and also heavily rely on intricate prompt engineering. To tackle this problem, we introduce a novel instruction distillation method. The key idea is to distill the pairwise ranking ability of open-sourced LLMs to a simpler but more efficient pointwise ranking. Specifically, given the same LLM, we first rank documents using the effective pairwise approach with complex instructions, and then distill the teacher predictions to the pointwise approach with simpler instructions. Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that instruction distillation can improve efficiency by 10 to 100x and also enhance the ranking performance of LLMs. Furthermore, our approach surpasses the performance of existing supervised methods like monoT5 and is on par with the state-of-the-art zero-shot methods. The code to reproduce our results is available at www.github.com/sunnweiwei/RankGPT.",,arXiv,"['cs.ir', 'cs.cl']",, indicative summarization of long discussions,"['Shahbaz Syed', 'Dominik Schwabe', 'Khalid Al-Khatib', 'Martin Potthast']",http://arxiv.org/pdf/2311.01882v1.pdf,2023-11-03,," Online forums encourage the exchange and discussion of different stances on many topics. Not only do they provide an opportunity to present one's own arguments, but may also gather a broad cross-section of others' arguments. However, the resulting long discussions are difficult to overview. This paper presents a novel unsupervised approach using large language models (LLMs) to generating indicative summaries for long discussions that basically serve as tables of contents. Our approach first clusters argument sentences, generates cluster labels as abstractive summaries, and classifies the generated cluster labels into argumentation frames resulting in a two-level summary. Based on an extensively optimized prompt engineering approach, we evaluate 19 LLMs for generative cluster labeling and frame classification. To evaluate the usefulness of our indicative summaries, we conduct a purpose-driven user study via a new visual interface called Discussion Explorer: It shows that our proposed indicative summaries serve as a convenient navigation tool to explore long discussions.",,arXiv,['cs.cl'],, automating governing knowledge commons and contextual integrity (gkcci) privacy policy annotations with large language models,"['Jake Chanenson', 'Madison Pickering', 'Noah Apthorpe']",http://arxiv.org/pdf/2311.02192v1.pdf,2023-11-03,," Identifying contextual integrity (CI) and governing knowledge commons (GKC) parameters in privacy policy texts can facilitate normative privacy analysis. However, GKC-CI annotation has heretofore required manual or crowdsourced effort.
This paper demonstrates that high-accuracy GKC-CI parameter annotation of privacy policies can be performed automatically using large language models. We fine-tune 18 open-source and proprietary models on 21,588 GKC-CI annotations from 16 ground truth privacy policies. Our best-performing model (fine-tuned GPT-3.5 Turbo with prompt engineering) has an accuracy of 86%, exceeding the performance of prior crowdsourcing approaches despite the complexity of privacy policy texts and the nuance of the GKC-CI annotation task. We apply our best-performing model to privacy policies from 164 popular online services, demonstrating the effectiveness of scaling GKC-CI annotation for data exploration. We make all annotated policies as well as the training data and scripts needed to fine-tune our best-performing model publicly available for future research.",,arXiv,"['cs.cy', 'cs.cl', 'cs.lg']",, requirements engineering using generative ai prompts and prompting patterns,"['Krishna Ronanki', 'Beatriz Cabrero-Daniel', 'Jennifer Horkoff', 'Christian Berger']",http://arxiv.org/pdf/2311.03832v1.pdf,2023-11-07,," [Context]: Companies are increasingly recognizing the importance of automating Requirements Engineering (RE) tasks due to their resource-intensive nature. The advent of GenAI has made these tasks more amenable to automation, thanks to its ability to understand and interpret context effectively. [Problem]: However, in the context of GenAI, prompt engineering is a critical factor for success. Despite this, we currently lack tools and methods to systematically assess and determine the most effective prompt patterns to employ for a particular RE task. [Method]: Two tasks related to requirements, specifically requirement classification and tracing, were automated using the GPT-3.5 turbo API. The performance evaluation involved assessing various prompts created using 5 prompt patterns and implemented programmatically to perform the selected RE tasks, focusing on metrics such as precision, recall, accuracy, and F-Score. [Results]: This paper evaluates the effectiveness of the 5 prompt patterns' ability to make GPT-3.5 turbo perform the selected RE tasks and offers recommendations on which prompt pattern to use for a specific RE task. Additionally, it also provides an evaluation framework as a reference for researchers and practitioners who want to evaluate different prompt patterns for different RE tasks.",,arXiv,['cs.se'],, actionclip a new paradigm for video action recognition,"['Mengmeng Wang', 'Jiazheng Xing', 'Yong Liu']",http://arxiv.org/pdf/2109.08472v1.pdf,2021-09-17,," The canonical approach to video action recognition dictates a neural model to do a classic and standard 1-of-N majority vote task. They are trained to predict a fixed set of predefined categories, limiting their transferable ability on new datasets with unseen concepts. In this paper, we provide a new perspective on action recognition by attaching importance to the semantic information of label texts rather than simply mapping them into numbers. Specifically, we model this task as a video-text matching problem within a multimodal learning framework, which strengthens the video representation with more semantic language supervision and enables our model to do zero-shot action recognition without any further labeled data or parameters requirements. Moreover, to handle the deficiency of label texts and make use of tremendous web data, we propose a new paradigm based on this multimodal learning framework for action recognition, which we dub ""pre-train, prompt and fine-tune"".
This paradigm first learns powerful representations from pre-training on a large amount of web image-text or video-text data. Then it makes the action recognition task to act more like pre-training problems via prompt engineering. Finally, it end-to-end fine-tunes on target datasets to obtain strong performance. We give an instantiation of the new paradigm, ActionCLIP, which not only has superior and flexible zero-shot/few-shot transfer ability but also reaches a top performance on general action recognition task, achieving 83.8% top-1 accuracy on Kinetics-400 with a ViT-B/16 as the backbone. Code is available at https://github.com/sallymmx/ActionCLIP.git",,arXiv,['cs.cv'],, learning to prompt for openvocabulary object detection with visionlanguage model,"['Yu Du', 'Fangyun Wei', 'Zihe Zhang', 'Miaojing Shi', 'Yue Gao', 'Guoqi Li']",http://arxiv.org/pdf/2203.14940v1.pdf,2022-03-28,," Recently, vision-language pre-training shows great potential in open-vocabulary object detection, where detectors trained on base classes are devised for detecting new classes. The class text embedding is firstly generated by feeding prompts to the text encoder of a pre-trained vision-language model. It is then used as the region classifier to supervise the training of a detector. The key element that leads to the success of this model is the proper prompt, which requires careful words tuning and ingenious design. To avoid laborious prompt engineering, there are some prompt representation learning methods being proposed for the image classification task, which however can only be sub-optimal solutions when applied to the detection task. In this paper, we introduce a novel method, detection prompt (DetPro), to learn continuous prompt representations for open-vocabulary object detection based on the pre-trained vision-language model. Different from the previous classification-oriented methods, DetPro has two highlights: 1) a background interpretation scheme to include the proposals in image background into the prompt training; 2) a context grading scheme to separate proposals in image foreground for tailored prompt training. We assemble DetPro with ViLD, a recent state-of-the-art open-world object detector, and conduct experiments on the LVIS as well as transfer learning on the Pascal VOC, COCO, Objects365 datasets. Experimental results show that our DetPro outperforms the baseline ViLD in all settings, e.g., +3.4 APbox and +3.0 APmask improvements on the novel classes of LVIS. Code and models are available at https://github.com/dyabel/detpro.",,arXiv,['cs.cv'],, no token left behind explainabilityaided image classification and generation,"['Roni Paiss', 'Hila Chefer', 'Lior Wolf']",http://arxiv.org/pdf/2204.04908v2.pdf,2022-04-11,," The application of zero-shot learning in computer vision has been revolutionized by the use of image-text matching models. The most notable example, CLIP, has been widely used for both zero-shot classification and guiding generative models with a text prompt. However, the zero-shot use of CLIP is unstable with respect to the phrasing of the input text, making it necessary to carefully engineer the prompts used. We find that this instability stems from a selective similarity score, which is based only on a subset of the semantically meaningful input tokens. To mitigate it, we present a novel explainability-based approach, which adds a loss term to ensure that CLIP focuses on all relevant semantic parts of the input, in addition to employing the CLIP similarity loss used in previous works. 
When applied to one-shot classification through prompt engineering, our method yields an improvement in the recognition rate, without additional training or fine-tuning. Additionally, we show that CLIP guidance of generative models using our method significantly improves the generated images. Finally, we demonstrate a novel use of CLIP guidance for text-based image generation with spatial conditioning on object location, by requiring the image explainability heatmap for each object to be confined to a pre-determined bounding box.",,arXiv,['cs.cv'],, on measuring social biases in promptbased multitask learning,"['Afra Feyza Akyürek', 'Sejin Paik', 'Muhammed Yusuf Kocyigit', 'Seda Akbiyik', 'Şerife Leman Runyun', 'Derry Wijaya']",http://arxiv.org/pdf/2205.11605v1.pdf,2022-05-23,," Large language models trained on a mixture of NLP tasks that are converted into a text-to-text format using prompts, can generalize into novel forms of language and handle novel tasks. A large body of work within prompt engineering attempts to understand the effects of input forms and prompts in achieving superior performance. We consider an alternative measure and inquire whether the way in which an input is encoded affects social biases promoted in outputs. In this paper, we study T0, a large-scale multi-task text-to-text language model trained using prompt-based learning. We consider two different forms of semantically equivalent inputs: question-answer format and premise-hypothesis format. We use an existing bias benchmark for the former BBQ and create the first bias benchmark in natural language inference BBNLI with hand-written hypotheses while also converting each benchmark into the other form. The results on two benchmarks suggest that given two different formulations of essentially the same input, T0 conspicuously acts more biased in question answering form, which is seen during training, compared to premise-hypothesis form which is unlike its training examples. Code and data are released under https://github.com/feyzaakyurek/bbnli.",,arXiv,"['cs.cl', 'cs.cy']",, ordinalclip learning rank prompts for languageguided ordinal regression,"['Wanhua Li', 'Xiaoke Huang', 'Zheng Zhu', 'Yansong Tang', 'Xiu Li', 'Jie Zhou', 'Jiwen Lu']",http://arxiv.org/pdf/2206.02338v2.pdf,2022-06-06,," This paper presents a language-powered paradigm for ordinal regression. Existing methods usually treat each rank as a category and employ a set of weights to learn these concepts. These methods are easy to overfit and usually attain unsatisfactory performance as the learned concepts are mainly derived from the training set. Recent large pre-trained vision-language models like CLIP have shown impressive performance on various visual tasks. In this paper, we propose to learn the rank concepts from the rich semantic CLIP latent space. Specifically, we reformulate this task as an image-language matching problem with a contrastive objective, which regards labels as text and obtains a language prototype from a text encoder for each rank. While prompt engineering for CLIP is extremely time-consuming, we propose OrdinalCLIP, a differentiable prompting method for adapting CLIP for ordinal regression. OrdinalCLIP consists of learnable context tokens and learnable rank embeddings; The learnable rank embeddings are constructed by explicitly modeling numerical continuity, resulting in well-ordered, compact language prototypes in the CLIP space. 
Once learned, we can only save the language prototypes and discard the huge language model, resulting in zero additional computational overhead compared with the linear head counterpart. Experimental results show that our paradigm achieves competitive performance in general ordinal regression tasks, and gains improvements in few-shot and distribution shift settings for age estimation. The code is available at https://github.com/xk-huang/OrdinalCLIP.",,arXiv,['cs.cv'],, unsupervised hashing with semantic concept mining,"['Rong-Cheng Tu', 'Xian-Ling Mao', 'Kevin Qinghong Lin', 'Chengfei Cai', 'Weize Qin', 'Hongfa Wang', 'Wei Wei', 'Heyan Huang']",http://arxiv.org/pdf/2209.11475v1.pdf,2022-09-23,," Recently, to improve the unsupervised image retrieval performance, plenty of unsupervised hashing methods have been proposed by designing a semantic similarity matrix, which is based on the similarities between image features extracted by a pre-trained CNN model. However, most of these methods tend to ignore high-level abstract semantic concepts contained in images. Intuitively, concepts play an important role in calculating the similarity among images. In real-world scenarios, each image is associated with some concepts, and the similarity between two images will be larger if they share more identical concepts. Inspired by the above intuition, in this work, we propose a novel Unsupervised Hashing with Semantic Concept Mining, called UHSCM, which leverages a VLP model to construct a high-quality similarity matrix. Specifically, a set of randomly chosen concepts is first collected. Then, by employing a vision-language pretraining (VLP) model with the prompt engineering which has shown strong power in visual representation learning, the set of concepts is denoised according to the training images. Next, the proposed method UHSCM applies the VLP model with prompting again to mine the concept distribution of each image and construct a high-quality semantic similarity matrix based on the mined concept distributions. Finally, with the semantic similarity matrix as guiding information, a novel hashing loss with a modified contrastive loss based regularization item is proposed to optimize the hashing network. Extensive experiments on three benchmark datasets show that the proposed method outperforms the state-of-the-art baselines in the image retrieval task.",,arXiv,"['cs.cv', 'cs.ir']",, "chat2vis generating data visualisations via natural language using chatgpt, codex and gpt3 large language models","['Paula Maddigan', 'Teo Susnjak']",http://arxiv.org/pdf/2302.02094v2.pdf,2023-02-04,," The field of data visualisation has long aimed to devise solutions for generating visualisations directly from natural language text. Research in Natural Language Interfaces (NLIs) has contributed towards the development of such techniques. However, the implementation of workable NLIs has always been challenging due to the inherent ambiguity of natural language, as well as in consequence of unclear and poorly written user queries which pose problems for existing language models in discerning user intent. Instead of pursuing the usual path of developing new iterations of language models, this study uniquely proposes leveraging the advancements in pre-trained large language models (LLMs) such as ChatGPT and GPT-3 to convert free-form natural language directly into code for appropriate visualisations. 
This paper presents a novel system,Chat2VIS, which takes advantage of the capabilities of LLMs and demonstrateshow, with effective prompt engineering, the complex problem of languageunderstanding can be solved more efficiently, resulting in simpler and moreaccurate end-to-end solutions than prior approaches. Chat2VIS shows that LLMstogether with the proposed prompts offer a reliable approach to renderingvisualisations from natural language queries, even when queries are highlymisspecified and underspecified. This solution also presents a significantreduction in costs for the development of NLI systems, while attaining greatervisualisation inference abilities compared to traditional NLP approaches thatuse hand-crafted grammar rules and tailored models. This study also presentshow LLM prompts can be constructed in a way that preserves data security andprivacy while being generalisable to different datasets. This work compares theperformance of GPT-3, Codex and ChatGPT across a number of case studies andcontrasts the performances with prior studies.",,arXiv,['cs.hc'],, prompt stealing attacks against texttoimage generation models,"['Xinyue Shen', 'Yiting Qu', 'Michael Backes', 'Yang Zhang']",http://arxiv.org/pdf/2302.09923v1.pdf,2023-02-20,," Text-to-Image generation models have revolutionized the artwork designprocess and enabled anyone to create high-quality images by entering textdescriptions called prompts. Creating a high-quality prompt that consists of asubject and several modifiers can be time-consuming and costly. In consequence,a trend of trading high-quality prompts on specialized marketplaces hasemerged. In this paper, we propose a novel attack, namely prompt stealingattack, which aims to steal prompts from generated images by text-to-imagegeneration models. Successful prompt stealing attacks direct violate theintellectual property and privacy of prompt engineers and also jeopardize thebusiness model of prompt trading marketplaces. We first perform a large-scaleanalysis on a dataset collected by ourselves and show that a successful promptstealing attack should consider a prompt's subject as well as its modifiers. Wethen propose the first learning-based prompt stealing attack, PromptStealer,and demonstrate its superiority over two baseline methods quantitatively andqualitatively. We also make some initial attempts to defend PromptStealer. Ingeneral, our study uncovers a new attack surface in the ecosystem created bythe popular text-to-image generation models. We hope our results can help tomitigate the threat. To facilitate research in this field, we will share ourdataset and code with the community.",,arXiv,"['cs.cr', 'cs.lg']",, extracting accurate materials data from research papers with conversational language models and prompt engineering,"['Maciej P. Polak', 'Dane Morgan']",http://arxiv.org/pdf/2303.05352v2.pdf,2023-03-07,," There has been a growing effort to replace hand extraction of data fromresearch papers with automated data extraction based on natural languageprocessing, language models, and recently, large language models (LLMs).Although these methods enable efficient extraction of data from large sets ofresearch papers, they require a significant amount of up-front effort,expertise, and coding. In this work we propose the ChatExtract method that canfully automate very accurate data extraction with minimal initial effort andbackground, using an advanced conversational LLM. 
ChatExtract consists of a setof engineered prompts applied to a conversational LLM that both identifysentences with data, extract that data, and assure the data's correctnessthrough a series of follow-up questions. These follow-up questions largelyovercome known issues with LLMs providing factually inaccurate responses.ChatExtract can be applied with any conversational LLMs and yields very highquality data extraction. In tests on materials data we find precision andrecall both close to 90% from the best conversational LLMs, like ChatGPT-4. Wedemonstrate that the exceptional performance is enabled by the informationretention in a conversational model combined with purposeful redundancy andintroducing uncertainty through follow-up prompts. These results suggest thatapproaches similar to ChatExtract, due to their simplicity, transferability,and accuracy are likely to become powerful tools for data extraction in thenear future. Finally, databases for critical cooling rates of metallic glassesand yield strengths of high entropy alloys are developed using ChatExtract.",,arXiv,"['cs.cl', 'cond-mat.mtrl-sci']",, ten quick tips for harnessing the power of chatgptgpt4 in computational biology,"['Tiago Lubiana', 'Rafael Lopes', 'Pedro Medeiros', 'Juan Carlo Silva', 'Andre Nicolau Aquime Goncalves', 'Vinicius Maracaja-Coutinho', 'Helder I Nakaya']",http://arxiv.org/pdf/2303.16429v1.pdf,2023-03-29,," The rise of advanced chatbots, such as ChatGPT, has sparked curiosity in thescientific community. ChatGPT is a general-purpose chatbot powered by largelanguage models (LLMs) GPT-3.5 and GPT-4, with the potential to impact numerousfields, including computational biology. In this article, we offer ten tipsbased on our experience with ChatGPT to assist computational biologists inoptimizing their workflows. We have collected relevant prompts and reviewed thenascent literature in the field, compiling tips we project to remain pertinentfor future ChatGPT and LLM iterations, ranging from code refactoring toscientific writing to prompt engineering. We hope our work will helpbioinformaticians to complement their workflows while staying aware of thevarious implications of using this technology. Additionally, to track new andcreative applications for bioinformatics tools such as ChatGPT, we haveestablished a GitHub repository athttps://github.com/csbl-br/awesome-compbio-chatgpt. Our belief is that ethicaladherence to ChatGPT and other LLMs will increase the efficiency ofcomputational biologists, ultimately advancing the pace of scientific discoveryin the life sciences.",,arXiv,"['q-bio.ot', '92-04']",, pair programming with large language models for sampling and estimation of copulas,['Jan Górecki'],http://arxiv.org/pdf/2303.18116v1.pdf,2023-03-31,," Without writing a single line of code by a human, an example Monte Carlosimulation based application for stochastic dependence modeling with copulas isdeveloped using a state-of-the-art large language model (LLM) fine-tuned forconversations. This includes interaction with ChatGPT in natural language andusing mathematical formalism, which, under careful supervision by ahuman-expert, led to producing a working code in MATLAB, Python and R forsampling from a given copula model, evaluation of the model's density,performing maximum likelihood estimation, optimizing the code for parallelcomputing for CPUs as well as for GPUs, and visualization of the computedresults. 
In contrast to other emerging studies that assess the accuracy of LLMslike ChatGPT on tasks from a selected area, this work rather investigates wayshow to achieve a successful solution of a standard statistical task in acollaboration of a human-expert and artificial intelligence (AI). Particularly,through careful prompt engineering, we separate successful solutions generatedby ChatGPT from unsuccessful ones, resulting in a comprehensive list of relatedpros and cons. It is demonstrated that if the typical pitfalls are avoided, wecan substantially benefit from collaborating with an AI partner. For example,we show that if ChatGPT is not able to provide a correct solution due to a lackof or incorrect knowledge, the human-expert can feed it with the correctknowledge, e.g., in the form of mathematical theorems and formulas, and make itto apply the gained knowledge in order to provide a solution that is correct.Such ability presents an attractive opportunity to achieve a programmedsolution even for users with rather limited knowledge of programmingtechniques.",,arXiv,"['cs.cl', 'stat.co', '65c60, 68n19, 68t50']",, lowcode llm visual programming over llms,"['Yuzhe Cai', 'Shaoguang Mao', 'Wenshan Wu', 'Zehua Wang', 'Yaobo Liang', 'Tao Ge', 'Chenfei Wu', 'Wang You', 'Ting Song', 'Yan Xia', 'Jonathan Tien', 'Nan Duan']",http://arxiv.org/pdf/2304.08103v2.pdf,2023-04-17,," Effectively utilizing LLMs for complex tasks is challenging, often involvinga time-consuming and uncontrollable prompt engineering process. This paperintroduces a novel human-LLM interaction framework, Low-code LLM. Itincorporates six types of simple low-code visual programming interactions, allsupported by clicking, dragging, or text editing, to achieve more controllableand stable responses. Through visual interaction with a graphical userinterface, users can incorporate their ideas into the workflow without writingtrivial prompts. The proposed Low-code LLM framework consists of a Planning LLMthat designs a structured planning workflow for complex tasks, which can becorrespondingly edited and confirmed by users through low-code visualprogramming operations, and an Executing LLM that generates responses followingthe user-confirmed workflow. We highlight three advantages of the low-code LLM:controllable generation results, user-friendly human-LLM interaction, andbroadly applicable scenarios. We demonstrate its benefits using four typicalapplications. By introducing this approach, we aim to bridge the gap betweenhumans and LLMs, enabling more effective and efficient utilization of LLMs forcomplex tasks. Our system will be soon publicly available at LowCodeLLM.",,arXiv,"['cs.cl', 'cs.hc']",, is chatgpt the ultimate programming assistant how far is it,"['Haoye Tian', 'Weiqi Lu', 'Tsz On Li', 'Xunzhu Tang', 'Shing-Chi Cheung', 'Jacques Klein', 'Tegawendé F. Bissyandé']",http://arxiv.org/pdf/2304.11938v2.pdf,2023-04-24,," Recently, the ChatGPT LLM has received great attention: it can be used as abot for discussing source code, prompting it to suggest changes, providedescriptions or even generate code. Typical demonstrations generally focus onexisting benchmarks, which may have been used in model training (i.e., dataleakage). To assess the feasibility of using an LLM as a useful assistant botfor programmers, we must assess its realistic capabilities on unseen problemsas well as its capabilities on various tasks. 
In this paper, we present anempirical study of ChatGPT's potential as a fully automated programmingassistant, focusing on the tasks of code generation, program repair, and codesummariziation. The study investigates ChatGPT's performance on commonprogramming problems and compares it with state-of-the-art approaches on twobenchmarks. Among several findings, our study shows that ChatGPT is effectivein dealing with common programming problems. However, our experiments alsoreveal limitations in terms of its attention span: detailed descriptions willconstrain the focus of ChatGPT and prevent it from leveraging its vastknowledge to solve the actual problem. Surprisingly, we have identified theability of ChatGPT to reason the original intention of the code. We expectfuture work to build on this insight for dealing with the open question of theoracle problem. Our findings contribute interesting insights to the developmentof LLMs for programming assistance, notably by demonstrating the importance ofprompt engineering, and providing a better understanding of ChatGPT's practicalapplications for software engineering.",,arXiv,"['cs.se', 'cs.ai']",, framing the newsfrom human perception to large language model inferences,"['David Alonso del Barrio', 'Daniel Gatica-Perez']",http://arxiv.org/pdf/2304.14456v1.pdf,2023-04-27,," Identifying the frames of news is important to understand the articles'vision, intention, message to be conveyed, and which aspects of the news areemphasized. Framing is a widely studied concept in journalism, and has emergedas a new topic in computing, with the potential to automate processes andfacilitate the work of journalism professionals. In this paper, we study thisissue with articles related to the Covid-19 anti-vaccine movement. First, tounderstand the perspectives used to treat this theme, we developed a protocolfor human labeling of frames for 1786 headlines of No-Vax movement articles ofEuropean newspapers from 5 countries. Headlines are key units in the writtenpress, and worth of analysis as many people only read headlines (or use them toguide their decision for further reading.) Second, considering advances inNatural Language Processing (NLP) with large language models, we investigatedtwo approaches for frame inference of news headlines: first with a GPT-3.5fine-tuning approach, and second with GPT-3.5 prompt-engineering. Our workcontributes to the study and analysis of the performance that these models haveto facilitate journalistic tasks like classification of frames, whileunderstanding whether the models are able to replicate human perception in theidentification of these frames.",,arXiv,"['cs.cl', 'cs.hc']",, sensitivity and robustness of large language models to prompt template in japanese text classification tasks,"['Chengguang Gan', 'Tatsunori Mori']",http://arxiv.org/pdf/2305.08714v2.pdf,2023-05-15,," Prompt engineering relevance research has seen a notable surge in recentyears, primarily driven by advancements in pre-trained language models andlarge language models. However, a critical issue has been identified withinthis domain: the inadequate of sensitivity and robustness of these modelstowards Prompt Templates, particularly in lesser-studied languages such asJapanese. This paper explores this issue through a comprehensive evaluation ofseveral representative Large Language Models (LLMs) and a widely-utilizedpre-trained model(PLM). 
These models are scrutinized using a benchmark datasetin Japanese, with the aim to assess and analyze the performance of the currentmultilingual models in this context. Our experimental results reveal startlingdiscrepancies. A simple modification in the sentence structure of the PromptTemplate led to a drastic drop in the accuracy of GPT-4 from 49.21 to 25.44.This observation underscores the fact that even the highly performance GPT-4model encounters significant stability issues when dealing with diverseJapanese prompt templates, rendering the consistency of the model's outputresults questionable. In light of these findings, we conclude by proposingpotential research trajectories to further enhance the development andperformance of Large Language Models in their current stage.",,arXiv,"['cs.cl', 'cs.ai']",, making language models better tool learners with execution feedback,"['Shuofei Qiao', 'Honghao Gui', 'Chengfei Lv', 'Qianghuai Jia', 'Huajun Chen', 'Ningyu Zhang']",http://arxiv.org/pdf/2305.13068v2.pdf,2023-05-22,," Tools serve as pivotal interfaces that enable humans to understand andreshape the environment. With the advent of foundation models, AI systems canutilize tools to expand their capabilities and interact with the real world.Existing tool learning methodologies, encompassing supervised fine-tuning andprompt engineering approaches, often induce large language models to utilizetools indiscriminately, as complex tasks often exceed their own competencies.However, introducing tools for simple tasks, which the models themselves canreadily resolve, can inadvertently propagate errors rather than enhanceperformance. This leads to the research question: can we teach language modelswhen and how to use tools? To meet this need, we propose Tool leaRning wIthexeCution fEedback (TRICE), a two-stage end-to-end framework that enables themodel to continually learn through feedback derived from tool execution,thereby learning when and how to use tools effectively. Experimental results,backed by further analysis, show that TRICE can make the large language modelselectively use tools by improving the accuracy of tool usage while enhancinginsufficient tool learning and mitigating excessive reliance on tools. Code anddatasets are available in https://github.com/zjunlp/trice.",,arXiv,"['cs.cl', 'cs.ai', 'cs.hc', 'cs.ir', 'cs.lg']",, game of tones faculty detection of gpt4 generated content in university assessments,"['Mike Perkins', 'Jasper Roe', 'Darius Postma', 'James McGaughran', 'Don Hickerson']",http://arxiv.org/pdf/2305.18081v1.pdf,2023-05-29,," This study explores the robustness of university assessments against the useof Open AI's Generative Pre-Trained Transformer 4 (GPT-4) generated content andevaluates the ability of academic staff to detect its use when supported by theTurnitin Artificial Intelligence (AI) detection tool. The research involvedtwenty-two GPT-4 generated submissions being created and included in theassessment process to be marked by fifteen different faculty members. The studyreveals that although the detection tool identified 91% of the experimentalsubmissions as containing some AI-generated content, the total detected contentwas only 54.8%. This suggests that the use of adversarial techniques regardingprompt engineering is an effective method in evading AI detection tools andhighlights that improvements to AI detection software are needed. 
Using theTurnitin AI detect tool, faculty reported 54.5% of the experimental submissionsto the academic misconduct process, suggesting the need for increased awarenessand training into these tools. Genuine submissions received a mean score of54.4, whereas AI-generated content scored 52.3, indicating the comparableperformance of GPT-4 in real-life situations. Recommendations include adjustingassessment strategies to make them more resistant to the use of AI tools, usingAI-inclusive assessment where possible, and providing comprehensive trainingprograms for faculty and students. This research contributes to understandingthe relationship between AI-generated content and academic assessment, urgingfurther investigation to preserve academic integrity.",,arXiv,"['cs.cy', 'cs.ai', 'k.4']",, a survey on segment anything model (sam) vision foundation model meets prompt engineering,"['Chaoning Zhang', 'Fachrina Dewi Puspitasari', 'Sheng Zheng', 'Chenghao Li', 'Yu Qiao', 'Taegoo Kang', 'Xinru Shan', 'Chenshuang Zhang', 'Caiyan Qin', 'Francois Rameau', 'Lik-Hang Lee', 'Sung-Ho Bae', 'Choong Seon Hong']",http://arxiv.org/pdf/2306.06211v3.pdf,2023-05-12,," Segment anything model (SAM) developed by Meta AI Research has recentlyattracted significant attention. Trained on a large segmentation dataset ofover 1 billion masks, SAM is capable of segmenting any object on a certainimage. In the original SAM work, the authors turned to zero-short transfertasks (like edge detection) for evaluating the performance of SAM. Recently,numerous works have attempted to investigate the performance of SAM in variousscenarios to recognize and segment objects. Moreover, numerous projects haveemerged to show the versatility of SAM as a foundation model by combining itwith other models, like Grounding DINO, Stable Diffusion, ChatGPT, etc. Withthe relevant papers and projects increasing exponentially, it is challengingfor the readers to catch up with the development of SAM. To this end, this workconducts the first yet comprehensive survey on SAM. This is an ongoing projectand we intend to update the manuscript on a regular basis. Therefore, readersare welcome to contact us if they complete new works related to SAM so that wecan include them in our next version.",,arXiv,['cs.cv'],, the economic tradeoffs of large language models a case study,"['Kristen Howell', 'Gwen Christian', 'Pavel Fomitchov', 'Gitit Kehat', 'Julianne Marzulla', 'Leanne Rolston', 'Jadin Tredup', 'Ilana Zimmerman', 'Ethan Selfridge', 'Joseph Bradley']",http://arxiv.org/pdf/2306.07402v1.pdf,2023-06-08,," Contacting customer service via chat is a common practice. Because employingcustomer service agents is expensive, many companies are turning to NLP thatassists human agents by auto-generating responses that can be used directly orwith modifications. Large Language Models (LLMs) are a natural fit for this usecase; however, their efficacy must be balanced with the cost of training andserving them. This paper assesses the practical cost and impact of LLMs for theenterprise as a function of the usefulness of the responses that they generate.We present a cost framework for evaluating an NLP model's utility for this usecase and apply it to a single brand as a case study in the context of anexisting agent assistance product. We compare three strategies for specializingan LLM - prompt engineering, fine-tuning, and knowledge distillation - usingfeedback from the brand's customer service agents. 
We find that the usabilityof a model's responses can make up for a large difference in inference cost forour case study brand, and we extrapolate our findings to the broader enterprisespace.",,arXiv,"['cs.cl', 'cs.ai']",, exploring the effectiveness of dataset synthesis an application of apple detection in orchards,"['Alexander van Meekeren', 'Maya Aghaei', 'Klaas Dijkstra']",http://arxiv.org/pdf/2306.11763v1.pdf,2023-06-20,," Deep object detection models have achieved notable successes in recent years,but one major obstacle remains: the requirement for a large amount of trainingdata. Obtaining such data is a tedious process and is mainly time consuming,leading to the exploration of new research avenues like synthetic datageneration techniques. In this study, we explore the usability of StableDiffusion 2.1-base for generating synthetic datasets of apple trees for objectdetection and compare it to a baseline model trained on real-world data. Aftercreating a dataset of realistic apple trees with prompt engineering andutilizing a previously trained Stable Diffusion model, the custom dataset wasannotated and evaluated by training a YOLOv5m object detection model to predictapples in a real-world apple detection dataset. YOLOv5m was chosen for itsrapid inference time and minimal hardware demands. Results demonstrate that themodel trained on generated data is slightly underperforming compared to abaseline model trained on real-world images when evaluated on a set ofreal-world images. However, these findings remain highly promising, as theaverage precision difference is only 0.09 and 0.06, respectively. Qualitativeresults indicate that the model can accurately predict the location of apples,except in cases of heavy shading. These findings illustrate the potential ofsynthetic data generation techniques as a viable alternative to the collectionof extensive training data for object detection models.",,arXiv,['cs.cv'],, do you still need a manual smart contract audit,"['Isaac David', 'Liyi Zhou', 'Kaihua Qin', 'Dawn Song', 'Lorenzo Cavallaro', 'Arthur Gervais']",http://arxiv.org/pdf/2306.12338v2.pdf,2023-06-21,," We investigate the feasibility of employing large language models (LLMs) forconducting the security audit of smart contracts, a traditionallytime-consuming and costly process. Our research focuses on the optimization ofprompt engineering for enhanced security analysis, and we evaluate theperformance and accuracy of LLMs using a benchmark dataset comprising 52Decentralized Finance (DeFi) smart contracts that have previously beencompromised. Our findings reveal that, when applied to vulnerable contracts, both GPT-4and Claude models correctly identify the vulnerability type in 40% of thecases. However, these models also demonstrate a high false positive rate,necessitating continued involvement from manual auditors. The LLMs testedoutperform a random model by 20% in terms of F1-score. To ensure the integrity of our study, we conduct mutation testing on fivenewly developed and ostensibly secure smart contracts, into which we manuallyinsert two and 15 vulnerabilities each. This testing yielded a remarkablebest-case 78.7% true positive rate for the GPT-4-32k model. We tested both,asking the models to perform a binary classification on whether a contract isvulnerable, and a non-binary prompt. We also examined the influence of modeltemperature variations and context length on the LLM's performance. 
Despite the potential for many further enhancements, this work lays thegroundwork for a more efficient and economical approach to smart contractsecurity audits.",,arXiv,['cs.cr'],, comparative analysis of gpt4 and human graders in evaluating praise given to students in synthetic dialogues,"['Dollaya Hirunyasiri', 'Danielle R. Thomas', 'Jionghao Lin', 'Kenneth R. Koedinger', 'Vincent Aleven']",http://arxiv.org/pdf/2307.02018v1.pdf,2023-07-05,," Research suggests that providing specific and timely feedback to human tutorsenhances their performance. However, it presents challenges due to thetime-consuming nature of assessing tutor performance by human evaluators. Largelanguage models, such as the AI-chatbot ChatGPT, hold potential for offeringconstructive feedback to tutors in practical settings. Nevertheless, theaccuracy of AI-generated feedback remains uncertain, with scant researchinvestigating the ability of models like ChatGPT to deliver effective feedback.In this work-in-progress, we evaluate 30 dialogues generated by GPT-4 in atutor-student setting. We use two different prompting approaches, the zero-shotchain of thought and the few-shot chain of thought, to identify specificcomponents of effective praise based on five criteria. These approaches arethen compared to the results of human graders for accuracy. Our goal is toassess the extent to which GPT-4 can accurately identify each praise criterion.We found that both zero-shot and few-shot chain of thought approaches yieldcomparable results. GPT-4 performs moderately well in identifying instanceswhen the tutor offers specific and immediate praise. However, GPT-4underperforms in identifying the tutor's ability to deliver sincere praise,particularly in the zero-shot prompting scenario where examples of sinceretutor praise statements were not provided. Future work will focus on enhancingprompt engineering, developing a more general tutoring rubric, and evaluatingour method using real-life tutoring dialogues.",,arXiv,"['cs.cl', 'cs.ai', 'cs.hc']",, "right to be forgotten in the era of large language models implications, challenges, and solutions","['Dawen Zhang', 'Pamela Finckenberg-Broman', 'Thong Hoang', 'Shidong Pan', 'Zhenchang Xing', 'Mark Staples', 'Xiwei Xu']",http://arxiv.org/pdf/2307.03941v3.pdf,2023-07-08,," The Right to be Forgotten (RTBF) was first established as the result of theruling of Google Spain SL, Google Inc. v AEPD, Mario Costeja Gonz\'alez, andwas later included as the Right to Erasure under the General Data ProtectionRegulation (GDPR) of European Union to allow individuals the right to requestpersonal data be deleted by organizations. Specifically for search engines,individuals can send requests to organizations to exclude their informationfrom the query results. It was a significant emergent right as the result ofthe evolution of technology. With the recent development of Large LanguageModels (LLMs) and their use in chatbots, LLM-enabled software systems havebecome popular. But they are not excluded from the RTBF. Compared with theindexing approach used by search engines, LLMs store, and process informationin a completely different way. This poses new challenges for compliance withthe RTBF. In this paper, we explore these challenges and provide our insightson how to implement technical solutions for the RTBF, including the use ofdifferential privacy, machine unlearning, model editing, and promptengineering. 
With the rapid advancement of AI and the increasing need ofregulating this powerful technology, learning from the case of RTBF can providevaluable lessons for technical practitioners, legal experts, organizations, andauthorities.",,arXiv,"['cs.cy', 'cs.ai', 'cs.cl']",, gpt3 models are fewshot financial reasoners,"['Raul Salles de Padua', 'Imran Qureshi', 'Mustafa U. Karakaplan']",http://arxiv.org/pdf/2307.13617v2.pdf,2023-07-25,," Financial analysis is an important tool for evaluating company performance.Practitioners work to answer financial questions to make profitable investmentdecisions, and use advanced quantitative analyses to do so. As a result,Financial Question Answering (QA) is a question answering task that requiresdeep reasoning about numbers. Furthermore, it is unknown how well pre-trainedlanguage models can reason in the financial domain. The currentstate-of-the-art requires a retriever to collect relevant facts about thefinancial question from the text and a generator to produce a valid financialprogram and a final answer. However, recently large language models like GPT-3have achieved state-of-the-art performance on wide variety of tasks with just afew shot examples. We run several experiments with GPT-3 and find that aseparate retrieval model and logic engine continue to be essential componentsto achieving SOTA performance in this task, particularly due to the precisenature of financial questions and the complex information stored in financialdocuments. With this understanding, our refined prompt-engineering approach onGPT-3 achieves near SOTA accuracy without any fine-tuning.",,arXiv,"['cs.cl', 'cs.ai']",, evaluating chatgpt textmining of clinical records for obesity monitoring,"['Ivo S. Fins', 'Heather Davies', 'Sean Farrell', 'Jose R. Torres', 'Gina Pinchbeck', 'Alan D. Radford', 'Peter-John Noble']",http://arxiv.org/pdf/2308.01666v1.pdf,2023-08-03,," Background: Veterinary clinical narratives remain a largely untapped resourcefor addressing complex diseases. Here we compare the ability of a largelanguage model (ChatGPT) and a previously developed regular expression (RegexT)to identify overweight body condition scores (BCS) in veterinary narratives.Methods: BCS values were extracted from 4,415 anonymised clinical narrativesusing either RegexT or by appending the narrative to a prompt sent to ChatGPTcoercing the model to return the BCS information. Data were manually reviewedfor comparison. Results: The precision of RegexT was higher (100%, 95% CI94.81-100%) than the ChatGPT (89.3%; 95% CI82.75-93.64%). However, the recallof ChatGPT (100%. 95% CI 96.18-100%) was considerably higher than that ofRegexT (72.6%, 95% CI 63.92-79.94%). Limitations: Subtle prompt engineering isneeded to improve ChatGPT output. Conclusions: Large language models creatediverse opportunities and, whilst complex, present an intuitive interface toinformation but require careful implementation to avoid unpredictable errors.",,arXiv,"['cs.ir', 'cs.cl']",, large language models in fault localisation,"['Yonghao Wu', 'Zheng Li', 'Jie M. Zhang', 'Mike Papadakis', 'Mark Harman', 'Yong Liu']",http://arxiv.org/pdf/2308.15276v3.pdf,2023-08-29,," Large Language Models (LLMs) have shown promise in multiple softwareengineering tasks including code generation, program repair, codesummarisation, and test generation. Fault localisation is instrumental inenabling automated debugging and repair of programs and was prominentlyfeatured as a highlight during the launch event of ChatGPT-4. 
Nevertheless, theperformance of LLMs compared to state-of-the-art methods, as well as the impactof prompt design and context length on their efficacy, remains unclear. To fillthis gap, this paper presents an in-depth investigation into the capability ofChatGPT-3.5 and ChatGPT-4, the two state-of-the-art LLMs, on faultlocalisation. Using the widely-adopted large-scale Defects4J dataset, wecompare the two LLMs with the existing fault localisation techniques. We alsoinvestigate the consistency of LLMs in fault localisation, as well as howprompt engineering and the length of code context affect the fault localisationeffectiveness. Our findings demonstrate that within function-level context, ChatGPT-4outperforms all the existing fault localisation methods. Additional error logscan further improve ChatGPT models' localisation accuracy and consistency, withan average 46.9% higher accuracy over the state-of-the-art baseline SmartFL onthe Defects4J dataset in terms of TOP-1 metric. However, when the code contextof the Defects4J dataset expands to the class-level, ChatGPT-4's performancesuffers a significant drop, with 49.9% lower accuracy than SmartFL under TOP-1metric. These observations indicate that although ChatGPT can effectivelylocalise faults under specific conditions, limitations are evident. Furtherresearch is needed to fully harness the potential of LLMs like ChatGPT forpractical fault localisation applications.",,arXiv,['cs.se'],, is gpt4 a good trader,['Bingzhe Wu'],http://arxiv.org/pdf/2309.10982v1.pdf,2023-09-20,," Recently, large language models (LLMs), particularly GPT-4, have demonstratedsignificant capabilities in various planning and reasoning tasks\cite{cheng2023gpt4,bubeck2023sparks}. Motivated by these advancements, therehas been a surge of interest among researchers to harness the capabilities ofGPT-4 for the automated design of quantitative factors that do not overlap withexisting factor libraries, with an aspiration to achieve alpha returns\cite{webpagequant}. In contrast to these work, this study aims to examine thefidelity of GPT-4's comprehension of classic trading theories and itsproficiency in applying its code interpreter abilities to real-world tradingdata analysis. Such an exploration is instrumental in discerning whether theunderlying logic GPT-4 employs for trading is intrinsically reliable.Furthermore, given the acknowledged interpretative latitude inherent in mosttrading theories, we seek to distill more precise methodologies of deployingthese theories from GPT-4's analytical process, potentially offering invaluableinsights to human traders. To achieve this objective, we selected daily candlestick (K-line) data fromspecific periods for certain assets, such as the Shanghai Stock Index. Throughmeticulous prompt engineering, we guided GPT-4 to analyze the technicalstructures embedded within this data, based on specific theories like theElliott Wave Theory. We then subjected its analytical output to manualevaluation, assessing its interpretative depth and accuracy vis-\`a-vis thesetrading theories from multiple dimensions. 
The results and findings from thisstudy could pave the way for a synergistic amalgamation of human expertise andAI-driven insights in the realm of trading.",,arXiv,['cs.ai'],, batch calibration rethinking calibration for incontext learning and prompt engineering,"['Han Zhou', 'Xingchen Wan', 'Lev Proleev', 'Diana Mincu', 'Jilin Chen', 'Katherine Heller', 'Subhrajit Roy']",http://arxiv.org/pdf/2309.17249v2.pdf,2023-09-29,," Prompting and in-context learning (ICL) have become efficient learningparadigms for large language models (LLMs). However, LLMs suffer from promptbrittleness and various bias factors in the prompt, including but not limitedto the formatting, the choice verbalizers, and the ICL examples. To addressthis problem that results in unexpected performance degradation, calibrationmethods have been developed to mitigate the effects of these biases whilerecovering LLM performance. In this work, we first conduct a systematicanalysis of the existing calibration methods, where we both provide a unifiedview and reveal the failure cases. Inspired by these analyses, we propose BatchCalibration (BC), a simple yet intuitive method that controls the contextualbias from the batched input, unifies various prior approaches, and effectivelyaddresses the aforementioned issues. BC is zero-shot, inference-only, andincurs negligible additional costs. In the few-shot setup, we further extend BCto allow it to learn the contextual bias from labeled data. We validate theeffectiveness of BC with PaLM 2-(S, M, L) and CLIP models and demonstratestate-of-the-art performance over previous calibration baselines across morethan 10 natural language understanding and image classification tasks.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, suspicionagent playing imperfect information games with theory of mind aware gpt4,"['Jiaxian Guo', 'Bo Yang', 'Paul Yoo', 'Bill Yuchen Lin', 'Yusuke Iwasawa', 'Yutaka Matsuo']",http://arxiv.org/pdf/2309.17277v2.pdf,2023-09-29,," Unlike perfect information games, where all elements are known to everyplayer, imperfect information games emulate the real-world complexities ofdecision-making under uncertain or incomplete information. GPT-4, the recentbreakthrough in large language models (LLMs) trained on massive passive data,is notable for its knowledge retrieval and reasoning abilities. This paperdelves into the applicability of GPT-4's learned knowledge for imperfectinformation games. To achieve this, we introduce \textbf{Suspicion-Agent}, aninnovative agent that leverages GPT-4's capabilities for performing inimperfect information games. With proper prompt engineering to achievedifferent functions, Suspicion-Agent based on GPT-4 demonstrates remarkableadaptability across a range of imperfect information card games. Importantly,GPT-4 displays a strong high-order theory of mind (ToM) capacity, meaning itcan understand others and intentionally impact others' behavior. Leveragingthis, we design a planning strategy that enables GPT-4 to competently playagainst different opponents, adapting its gameplay style as needed, whilerequiring only the game rules and descriptions of observations as input. In theexperiments, we qualitatively showcase the capabilities of Suspicion-Agentacross three different imperfect information games and then quantitativelyevaluate it in Leduc Hold'em. The results show that Suspicion-Agent canpotentially outperform traditional algorithms designed for imperfectinformation games, without any specialized training or examples. 
In order toencourage and foster deeper insights within the community, we make ourgame-related data publicly available.",,arXiv,['cs.ai'],, investigating the limitation of clip models the worstperforming categories,"['Jie-Jing Shao', 'Jiang-Xin Shi', 'Xiao-Wen Yang', 'Lan-Zhe Guo', 'Yu-Feng Li']",http://arxiv.org/pdf/2310.03324v1.pdf,2023-10-05,," Contrastive Language-Image Pre-training (CLIP) provides a foundation model byintegrating natural language into visual concepts, enabling zero-shotrecognition on downstream tasks. It is usually expected that satisfactoryoverall accuracy can be achieved across numerous domains through well-designedtextual prompts. However, we found that their performance in the worstcategories is significantly inferior to the overall performance. For example,on ImageNet, there are a total of 10 categories with class-wise accuracy as lowas 0\%, even though the overall performance has achieved 64.1\%. Thisphenomenon reveals the potential risks associated with using CLIP models,particularly in risk-sensitive applications where specific categories holdsignificant importance. To address this issue, we investigate the alignmentbetween the two modalities in the CLIP model and propose the Class-wiseMatching Margin (\cmm) to measure the inference confusion. \cmm\ caneffectively identify the worst-performing categories and estimate the potentialperformance of the candidate prompts. We further query large language models toenrich descriptions of worst-performing categories and build a weightedensemble to highlight the efficient prompts. Experimental results clearlyverify the effectiveness of our proposal, where the accuracy on the worst-10categories on ImageNet is boosted to 5.2\%, without manual prompt engineering,laborious optimization, or access to labeled validation data.",,arXiv,"['cs.cv', 'cs.lg']",, large language modelempowered agents for simulating macroeconomic activities,"['Nian Li', 'Chen Gao', 'Yong Li', 'Qingmin Liao']",http://arxiv.org/pdf/2310.10436v1.pdf,2023-10-16,," The advent of the Web has brought about a paradigm shift in traditionaleconomics, particularly in the digital economy era, enabling the preciserecording and analysis of individual economic behavior. This has led to agrowing emphasis on data-driven modeling in macroeconomics. In macroeconomicresearch, Agent-based modeling (ABM) emerged as an alternative, evolvingthrough rule-based agents, machine learning-enhanced decision-making, and, morerecently, advanced AI agents. However, the existing works are suffering fromthree main challenges when endowing agents with human-like decision-making,including agent heterogeneity, the influence of macroeconomic trends, andmultifaceted economic factors. Large language models (LLMs) have recentlygained prominence in offering autonomous human-like characteristics. Therefore,leveraging LLMs in macroeconomic simulation presents an opportunity to overcometraditional limitations. In this work, we take an early step in introducing anovel approach that leverages LLMs in macroeconomic simulation. We designprompt-engineering-driven LLM agents to exhibit human-like decision-making andadaptability in the economic environment, with the abilities of perception,reflection, and decision-making to address the abovementioned challenges.Simulation experiments on macroeconomic activities show that LLM-empoweredagents can make realistic work and consumption decisions and emerge morereasonable macroeconomic phenomena than existing rule-based or AI agents. 
Ourwork demonstrates the promising potential to simulate macroeconomics based onLLM and its human-like characteristics.",,arXiv,['cs.ai'],, large language model for multiobjective evolutionary optimization,"['Fei Liu', 'Xi Lin', 'Zhenkun Wang', 'Shunyu Yao', 'Xialiang Tong', 'Mingxuan Yuan', 'Qingfu Zhang']",http://arxiv.org/pdf/2310.12541v2.pdf,2023-10-19,," Multiobjective evolutionary algorithms (MOEAs) are major methods for solvingmultiobjective optimization problems (MOPs). Many MOEAs have been proposed inthe past decades, of which the search operators need a carefully handcrafteddesign with domain knowledge. Recently, some attempts have been made to replacethe manually designed operators in MOEAs with learning-based operators (e.g.,neural network models). However, much effort is still required for designingand training such models, and the learned operators might not generalize wellon new problems. To tackle the above challenges, this work investigates a novelapproach that leverages the powerful large language model (LLM) to design MOEAoperators. With proper prompt engineering, we successfully let a general LLMserve as a black-box search operator for decomposition-based MOEA (MOEA/D) in azero-shot manner. In addition, by learning from the LLM behavior, we furtherdesign an explicit white-box operator with randomness and propose a new versionof decomposition-based MOEA, termed MOEA/D-LO. Experimental studies ondifferent test benchmarks show that our proposed method can achieve competitiveperformance with widely used MOEAs. It is also promising to see the operatoronly learned from a few instances can have robust generalization performance onunseen problems with quite different patterns and settings. The results revealthe potential benefits of using pre-trained LLMs in the design of MOEAs.",,arXiv,"['cs.ne', 'cs.ai', 'cs.cl', 'cs.et']",, enhancing zeroshot crypto sentiment with finetuned language model and prompt engineering,"['Rahman S M Wahidur', 'Ishmam Tashdeed', 'Manjit Kaur', ' Heung-No-Lee']",http://arxiv.org/pdf/2310.13226v1.pdf,2023-10-20,," Blockchain technology has revolutionized the financial landscape, withcryptocurrencies gaining widespread adoption for their decentralized andtransparent nature. As the sentiment expressed on social media platforms cansignificantly influence cryptocurrency discussions and market movements,sentiment analysis has emerged as a crucial tool for understanding publicopinion and predicting market trends. Motivated by the aim to enhance sentimentanalysis accuracy in the cryptocurrency domain, this paper investigatesfine-tuning techniques on large language models. This paper also investigatesthe efficacy of supervised fine-tuning and instruction-based fine-tuning onlarge language models for unseen tasks. Experimental results demonstrate asignificant average zero-shot performance gain of 40% after fine-tuning,highlighting the potential of this technique in optimizing pre-trained languagemodel efficiency. Additionally, the impact of instruction tuning on models ofvarying scales is examined, revealing that larger models benefit frominstruction tuning, achieving the highest average accuracy score of 75.16%. Incontrast, smaller-scale models may experience reduced generalization due to thecomplete utilization of model capacity. To gain deeper insight about howinstruction works with these language models, this paper presents anexperimental investigation into the response of an instruction-based modelunder different instruction tuning setups. 
The investigation demonstrates thatthe model achieves an average accuracy score of 72.38% for short and simpleinstructions. This performance significantly outperforms its accuracy underlong and complex instructions by over 12%, thereby effectively highlighting theprofound significance of instruction characteristics in maximizing modelperformance.",,arXiv,['cs.cl'],, openended instructable embodied agents with memoryaugmented large language models,"['Gabriel Sarch', 'Yue Wu', 'Michael J. Tarr', 'Katerina Fragkiadaki']",http://arxiv.org/pdf/2310.15127v2.pdf,2023-10-23,," Pre-trained and frozen large language models (LLMs) can effectively mapsimple scene rearrangement instructions to programs over a robot's visuomotorfunctions through appropriate few-shot example prompting. To parse open-domainnatural language and adapt to a user's idiosyncratic procedures, not knownduring prompt engineering time, fixed prompts fall short. In this paper, weintroduce HELPER, an embodied agent equipped with an external memory oflanguage-program pairs that parses free-form human-robot dialogue into actionprograms through retrieval-augmented LLM prompting: relevant memories areretrieved based on the current dialogue, instruction, correction, or VLMdescription, and used as in-context prompt examples for LLM querying. Thememory is expanded during deployment to include pairs of user's language andaction plans, to assist future inferences and personalize them to the user'slanguage and routines. HELPER sets a new state-of-the-art in the TEAChbenchmark in both Execution from Dialog History (EDH) and Trajectory fromDialogue (TfD), with a 1.7x improvement over the previous state-of-the-art forTfD. Our models, code, and video results can be found in our project's website:https://helper-agent-llm.github.io.",,arXiv,"['cs.ai', 'cs.cl', 'cs.lg', 'cs.ro']",, promisepromptdriven 3d medical image segmentation using pretrained image foundation models,"['Hao Li', 'Han Liu', 'Dewei Hu', 'Jiacheng Wang', 'Ipek Oguz']",http://arxiv.org/pdf/2310.19721v3.pdf,2023-10-30,," To address prevalent issues in medical imaging, such as data acquisitionchallenges and label availability, transfer learning from natural to medicalimage domains serves as a viable strategy to produce reliable segmentationresults. However, several existing barriers between domains need to be brokendown, including addressing contrast discrepancies, managing anatomicalvariability, and adapting 2D pretrained models for 3D segmentation tasks. Inthis paper, we propose ProMISe,a prompt-driven 3D medical image segmentationmodel using only a single point prompt to leverage knowledge from a pretrained2D image foundation model. In particular, we use the pretrained visiontransformer from the Segment Anything Model (SAM) and integrate lightweightadapters to extract depth-related (3D) spatial context without updating thepretrained weights. For robust results, a hybrid network with complementaryencoders is designed, and a boundary-aware loss is proposed to achieve preciseboundaries. We evaluate our model on two public datasets for colon and pancreastumor segmentations, respectively. Compared to the state-of-the-artsegmentation methods with and without prompt engineering, our proposed methodachieves superior performance. The code is publicly available athttps://github.com/MedICL-VU/ProMISe.",,arXiv,"['eess.iv', 'cs.cv']",, making large language models better data creators,"['Dong-Ho Lee', 'Jay Pujara', 'Mohit Sewak', 'Ryen W. 
White', 'Sujay Kumar Jauhar']",http://arxiv.org/pdf/2310.20111v1.pdf,2023-10-31,," Although large language models (LLMs) have advanced the state-of-the-art inNLP significantly, deploying them for downstream applications is stillchallenging due to cost, responsiveness, control, or concerns around privacyand security. As such, trainable models are still the preferred option in somecases. However, these models still require human-labeled data for optimalperformance, which is expensive and time-consuming to obtain. In order toaddress this issue, several techniques to reduce human effort involve labelingor generating data using LLMs. Although these methods are effective for certainapplications, in practice they encounter difficulties in real-world scenarios.Labeling data requires careful data selection, while generating datanecessitates task-specific prompt engineering. In this paper, we propose aunified data creation pipeline that requires only a single formatting example,and which is applicable to a broad range of tasks, including traditionallyproblematic ones with semantically devoid label spaces. In our experiments wedemonstrate that instruction-following LLMs are highly cost-effective datacreators, and that models trained with these data exhibit performance betterthan those trained with human-labeled data (by up to 17.5%) onout-of-distribution evaluation, while maintaining comparable performance onin-distribution tasks. These results have important implications for therobustness of NLP systems deployed in the real-world.",,arXiv,['cs.cl'],, vispercep a visionlanguage approach to enhance visual perception for people with blindness and low vision,"['Yu Hao', 'Fan Yang', 'Hao Huang', 'Shuaihang Yuan', 'Sundeep Rangan', 'John-Ross Rizzo', 'Yao Wang', 'Yi Fang']",http://arxiv.org/pdf/2310.20225v1.pdf,2023-10-31,," People with blindness and low vision (pBLV) encounter substantial challengeswhen it comes to comprehensive scene recognition and precise objectidentification in unfamiliar environments. Additionally, due to the visionloss, pBLV have difficulty in accessing and identifying potential trippinghazards on their own. In this paper, we present a pioneering approach thatleverages a large vision-language model to enhance visual perception for pBLV,offering detailed and comprehensive descriptions of the surroundingenvironments and providing warnings about the potential risks. Our methodbegins by leveraging a large image tagging model (i.e., Recognize Anything(RAM)) to identify all common objects present in the captured images. Therecognition results and user query are then integrated into a prompt, tailoredspecifically for pBLV using prompt engineering. By combining the prompt andinput image, a large vision-language model (i.e., InstructBLIP) generatesdetailed and comprehensive descriptions of the environment and identifiespotential risks in the environment by analyzing the environmental objects andscenes, relevant to the prompt. We evaluate our approach through experimentsconducted on both indoor and outdoor datasets. Our results demonstrate that ourmethod is able to recognize objects accurately and provide insightfuldescriptions and analysis of the environment for pBLV.",,arXiv,"['cs.cv', 'cs.ai']",, can large language models capture public opinion about global warming an empirical assessment of algorithmic fidelity and bias,"['S. Lee', 'T. Q. Peng', 'M. H. Goldberg', 'S. A. Rosenthal', 'J. E. Kotcher', 'E. W. Maibach', 'A. 
Leiserowitz']",http://arxiv.org/pdf/2311.00217v2.pdf,2023-11-01,," Large language models (LLMs) have demonstrated their potential in socialscience research by emulating human perceptions and behaviors, a conceptreferred to as algorithmic fidelity. This study assesses the algorithmicfidelity and bias of LLMs by utilizing two nationally representative climatechange surveys. The LLMs were conditioned on demographics and/or psychologicalcovariates to simulate survey responses. The findings indicate that LLMs caneffectively capture presidential voting behaviors but encounter challenges inaccurately representing global warming perspectives when relevant covariatesare not included. GPT-4 exhibits improved performance when conditioned on bothdemographics and covariates. However, disparities emerge in LLM estimations ofthe views of certain groups, with LLMs tending to underestimate worry aboutglobal warming among Black Americans. While highlighting the potential of LLMsto aid social science research, these results underscore the importance ofmeticulous conditioning, model selection, survey question format, and biasassessment when employing LLMs for survey simulation. Further investigationinto prompt engineering and algorithm auditing is essential to harness thepower of LLMs while addressing their inherent limitations.",,arXiv,"['cs.ai', 'cs.cy']",, bigbio a framework for datacentric biomedical natural language processing,"['Jason Alan Fries', 'Leon Weber', 'Natasha Seelam', 'Gabriel Altay', 'Debajyoti Datta', 'Samuele Garda', 'Myungsun Kang', 'Ruisi Su', 'Wojciech Kusa', 'Samuel Cahyawijaya', 'Fabio Barth', 'Simon Ott', 'Matthias Samwald', 'Stephen Bach', 'Stella Biderman', 'Mario Sänger', 'Bo Wang', 'Alison Callahan', 'Daniel León Periñán', 'Théo Gigant', 'Patrick Haller', 'Jenny Chim', 'Jose David Posada', 'John Michael Giorgi', 'Karthik Rangasai Sivaraman', 'Marc Pàmies', 'Marianna Nezhurina', 'Robert Martin', 'Michael Cullan', 'Moritz Freidank', 'Nathan Dahlberg', 'Shubhanshu Mishra', 'Shamik Bose', 'Nicholas Michio Broad', 'Yanis Labrak', 'Shlok S Deshmukh', 'Sid Kiblawi', 'Ayush Singh', 'Minh Chien Vu', 'Trishala Neeraj', 'Jonas Golde', 'Albert Villanova del Moral', 'Benjamin Beilharz']",http://arxiv.org/pdf/2206.15076v1.pdf,2022-06-30,," Training and evaluating language models increasingly requires theconstruction of meta-datasets --diverse collections of curated data with clearprovenance. Natural language prompting has recently lead to improved zero-shotgeneralization by transforming existing, supervised datasets into a diversityof novel pretraining tasks, highlighting the benefits of meta-dataset curation.While successful in general-domain text, translating these data-centricapproaches to biomedical language modeling remains challenging, as labeledbiomedical datasets are significantly underrepresented in popular data hubs. Toaddress this challenge, we introduce BigBIO a community library of 126+biomedical NLP datasets, currently covering 12 task categories and 10+languages. BigBIO facilitates reproducible meta-dataset curation viaprogrammatic access to datasets and their metadata, and is compatible withcurrent platforms for prompt engineering and end-to-end few/zero shot languagemodel evaluation. We discuss our process for task schema harmonization, dataauditing, contribution guidelines, and outline two illustrative use cases:zero-shot evaluation of biomedical prompts and large-scale, multi-tasklearning. 
BigBIO is an ongoing community effort and is available athttps://github.com/bigscience-workshop/biomedical",,arXiv,['cs.cl'],, "a multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity","['Yejin Bang', 'Samuel Cahyawijaya', 'Nayeon Lee', 'Wenliang Dai', 'Dan Su', 'Bryan Wilie', 'Holy Lovenia', 'Ziwei Ji', 'Tiezheng Yu', 'Willy Chung', 'Quyet V. Do', 'Yan Xu', 'Pascale Fung']",http://arxiv.org/pdf/2302.04023v4.pdf,2023-02-08,," This paper proposes a framework for quantitatively evaluating interactiveLLMs such as ChatGPT using publicly available data sets. We carry out anextensive technical evaluation of ChatGPT using 23 data sets covering 8different common NLP application tasks. We evaluate the multitask, multilingualand multi-modal aspects of ChatGPT based on these data sets and a newlydesigned multimodal dataset. We find that ChatGPT outperforms LLMs withzero-shot learning on most tasks and even outperforms fine-tuned models on sometasks. We find that it is better at understanding non-Latin script languagesthan generating them. It is able to generate multimodal content from textualprompts, via an intermediate code generation step. Moreover, we find thatChatGPT is 63.41% accurate on average in 10 different reasoning categoriesunder logical reasoning, non-textual reasoning, and commonsense reasoning,hence making it an unreliable reasoner. It is, for example, better at deductivethan inductive reasoning. ChatGPT suffers from hallucination problems likeother LLMs and it generates more extrinsic hallucinations from its parametricmemory as it does not have access to an external knowledge base. Finally, theinteractive feature of ChatGPT enables human collaboration with the underlyingLLM to improve its performance, i.e, 8% ROUGE-1 on summarization and 2% ChrF++on machine translation, in a multi-turn ""prompt engineering"" fashion. We alsorelease codebase for evaluation set extraction.",,arXiv,"['cs.cl', 'cs.ai']",, evaluation of gpt35 and gpt4 for supporting realworld information needs in healthcare delivery,"['Debadutta Dash', 'Rahul Thapa', 'Juan M. Banda', 'Akshay Swaminathan', 'Morgan Cheatham', 'Mehr Kashyap', 'Nikesh Kotecha', 'Jonathan H. Chen', 'Saurabh Gombar', 'Lance Downing', 'Rachel Pedreira', 'Ethan Goh', 'Angel Arnaout', 'Garret Kenn Morris', 'Honor Magon', 'Matthew P Lungren', 'Eric Horvitz', 'Nigam H. Shah']",http://arxiv.org/pdf/2304.13714v3.pdf,2023-04-26,," Despite growing interest in using large language models (LLMs) in healthcare,current explorations do not assess the real-world utility and safety of LLMs inclinical settings. Our objective was to determine whether two LLMs can serveinformation needs submitted by physicians as questions to an informaticsconsultation service in a safe and concordant manner. Sixty six questions froman informatics consult service were submitted to GPT-3.5 and GPT-4 via simpleprompts. 12 physicians assessed the LLM responses' possibility of patient harmand concordance with existing reports from an informatics consultation service.Physician assessments were summarized based on majority vote. For no questionsdid a majority of physicians deem either LLM response as harmful. For GPT-3.5,responses to 8 questions were concordant with the informatics consult report,20 discordant, and 9 were unable to be assessed. There were 29 responses withno majority on ""Agree"", ""Disagree"", and ""Unable to assess"". For GPT-4,responses to 13 questions were concordant, 15 discordant, and 3 were unable tobe assessed. 
There were 35 responses with no majority. Responses from both LLMs were largely devoid of overt harm, but less than 20% of the responses agreed with an answer from an informatics consultation service, responses contained hallucinated references, and physicians were divided on what constitutes harm. These results suggest that while general purpose LLMs are able to provide safe and credible responses, they often do not meet the specific information need of a given question. A definitive evaluation of the usefulness of LLMs in healthcare settings will likely require additional research on prompt engineering, calibration, and custom-tailoring of general purpose models.",,arXiv,"['cs.ai', 'cs.cl', 'cs.ir']",, zelda video analytics using visionlanguage models,"['Francisco Romero', 'Caleb Winston', 'Johann Hauswald', 'Matei Zaharia', 'Christos Kozyrakis']",http://arxiv.org/pdf/2305.03785v2.pdf,2023-05-05,," Advances in ML have motivated the design of video analytics systems that allow for structured queries over video datasets. However, existing systems limit query expressivity, require users to specify an ML model per predicate, rely on complex optimizations that trade off accuracy for performance, and return large amounts of redundant and low-quality results. This paper focuses on the recently developed Vision-Language Models (VLMs) that allow users to query images using natural language like ""cars during daytime at traffic intersections."" Through an in-depth analysis, we show VLMs address three limitations of current video analytics systems: general expressivity, a single general purpose model to query many predicates, and are both simple and fast. However, VLMs still return large numbers of redundant and low-quality results that can overwhelm and burden users. In addition, VLMs often require manual prompt engineering to improve result relevance. We present Zelda: a video analytics system that uses VLMs to return both relevant and semantically diverse results for top-K queries on large video datasets. Zelda prompts the VLM with the user's query in natural language. Zelda then automatically adds discriminator and synonym terms to boost accuracy, and terms to identify low-quality frames. To improve result diversity, Zelda uses semantic-rich VLM embeddings in an algorithm that prunes similar frames while considering their relevance to the query and the number of top-K results requested. We evaluate Zelda across five datasets and 19 queries and quantitatively show it achieves higher mean average precision (up to 1.15x) and improves average pairwise similarity (up to 1.16x) compared to using VLMs out-of-the-box. We also compare Zelda to a state-of-the-art video analytics engine and show that Zelda retrieves results 7.5x (up to 10.4x) faster for the same accuracy and frame diversity.",,arXiv,['cs.db'],, chatgpt chemistry assistant for text mining and prediction of mof synthesis,"['Zhiling Zheng', 'Oufan Zhang', 'Christian Borgs', 'Jennifer T. Chayes', 'Omar M. Yaghi']",http://arxiv.org/pdf/2306.11296v2.pdf,2023-06-20,," We use prompt engineering to guide ChatGPT in the automation of text mining of metal-organic frameworks (MOFs) synthesis conditions from diverse formats and styles of the scientific literature. This effectively mitigates ChatGPT's tendency to hallucinate information -- an issue that previously made the use of Large Language Models (LLMs) in scientific fields challenging. Our approach involves the development of a workflow implementing three different processes for text mining, programmed by ChatGPT itself. 
All of them enable parsing,searching, filtering, classification, summarization, and data unification withdifferent tradeoffs between labor, speed, and accuracy. We deploy this systemto extract 26,257 distinct synthesis parameters pertaining to approximately 800MOFs sourced from peer-reviewed research articles. This process incorporatesour ChemPrompt Engineering strategy to instruct ChatGPT in text mining,resulting in impressive precision, recall, and F1 scores of 90-99%.Furthermore, with the dataset built by text mining, we constructed amachine-learning model with over 86% accuracy in predicting MOF experimentalcrystallization outcomes and preliminarily identifying important factors in MOFcrystallization. We also developed a reliable data-grounded MOF chatbot toanswer questions on chemical reactions and synthesis procedures. Given that theprocess of using ChatGPT reliably mines and tabulates diverse MOF synthesisinformation in a unified format, while using only narrative language requiringno coding expertise, we anticipate that our ChatGPT Chemistry Assistant will bevery useful across various other chemistry sub-disciplines.",,arXiv,"['cs.ir', 'cond-mat.mtrl-sci', 'cs.cl', 'physics.chem-ph']",, identifying and extracting rare disease phenotypes with large language models,"['Cathy Shyr', 'Yan Hu', 'Paul A. Harris', 'Hua Xu']",http://arxiv.org/pdf/2306.12656v1.pdf,2023-06-22,," Rare diseases (RDs) are collectively common and affect 300 million peopleworldwide. Accurate phenotyping is critical for informing diagnosis andtreatment, but RD phenotypes are often embedded in unstructured text andtime-consuming to extract manually. While natural language processing (NLP)models can perform named entity recognition (NER) to automate extraction, amajor bottleneck is the development of a large, annotated corpus for modeltraining. Recently, prompt learning emerged as an NLP paradigm that can lead tomore generalizable results without any (zero-shot) or few labeled samples(few-shot). Despite growing interest in ChatGPT, a revolutionary large languagemodel capable of following complex human prompts and generating high-qualityresponses, none have studied its NER performance for RDs in the zero- andfew-shot settings. To this end, we engineered novel prompts aimed at extractingRD phenotypes and, to the best of our knowledge, are the first the establish abenchmark for evaluating ChatGPT's performance in these settings. We comparedits performance to the traditional fine-tuning approach and conducted anin-depth error analysis. Overall, fine-tuning BioClinicalBERT resulted inhigher performance (F1 of 0.689) than ChatGPT (F1 of 0.472 and 0.591 in thezero- and few-shot settings, respectively). Despite this, ChatGPT achievedsimilar or higher accuracy for certain entities (i.e., rare diseases and signs)in the one-shot setting (F1 of 0.776 and 0.725). This suggests that withappropriate prompt engineering, ChatGPT has the potential to match oroutperform fine-tuned language models for certain entity types with just onelabeled sample. 
While the proliferation of large language models may provideopportunities for supporting RD diagnosis and treatment, researchers andclinicians should critically evaluate model outputs and be well-informed oftheir limitations.",,arXiv,"['cs.cl', 'cs.ai']",, go beyond the obvious probing the gap of informal reasoning ability between humanity and llms by detective reasoning puzzle benchmark,"['Zhouhon Gu', 'Zihan Li', 'Lin Zhang', 'Zhuozhi Xiong', 'Haoning Ye', 'Yikai Zhang', 'Wenhao Huang', 'Xiaoxuan Zhu', 'Qianyu He', 'Rui Xu', 'Sihang Jiang', 'Shusen Wang', 'Zili Wang', 'Hongwei Feng', 'Zhixu Li', 'Yanghua Xiao']",http://arxiv.org/pdf/2307.05113v2.pdf,2023-07-11,," Informal reasoning ability is the ability to reason based on common sense,experience, and intuition.Humans use informal reasoning every day to extractthe most influential elements for their decision-making from a large amount oflife-like information.With the rapid development of language models, therealization of general artificial intelligence has emerged with hope. Given theoutstanding informal reasoning ability of humans, how much informal reasoningability language models have has not been well studied by scholars.In order toexplore the gap between humans and language models in informal reasoningability, this paper constructs a Detective Reasoning Benchmark, which is anassembly of 1,200 questions gathered from accessible online resources, aims atevaluating the model's informal reasoning ability in real-lifecontext.Considering the improvement of the model's informal reasoning abilityrestricted by the lack of benchmark, we further propose a Self-Question PromptFramework that mimics human thinking to enhance the model's informal reasoningability.The goals of self-question are to find key elements, deeply investigatethe connections between these elements, encourage the relationship between eachelement and the problem, and finally, require the model to reasonably answerthe problem.The experimental results show that human performance greatlyoutperforms the SoTA Language Models in Detective Reasoning Benchmark.Besides,Self-Question is proven to be the most effective prompt engineering inimproving GPT-4's informal reasoning ability, but it still does not evensurpass the lowest score made by human participants.Upon acceptance of thepaper, the source code for the benchmark will be made publicly accessible.",,arXiv,['cs.cl'],, "ai foundation models for weather and climate applications, design, and implementation","['S. Karthik Mukkavilli', 'Daniel Salles Civitarese', 'Johannes Schmude', 'Johannes Jakubik', 'Anne Jones', 'Nam Nguyen', 'Christopher Phillips', 'Sujit Roy', 'Shraddha Singh', 'Campbell Watson', 'Raghu Ganti', 'Hendrik Hamann', 'Udaysankar Nair', 'Rahul Ramachandran', 'Kommy Weldemariam']",http://arxiv.org/pdf/2309.10808v2.pdf,2023-09-19,," Machine learning and deep learning methods have been widely explored inunderstanding the chaotic behavior of the atmosphere and furthering weatherforecasting. There has been increasing interest from technology companies,government institutions, and meteorological agencies in building digital twinsof the Earth. Recent approaches using transformers, physics-informed machinelearning, and graph neural networks have demonstrated state-of-the-artperformance on relatively narrow spatiotemporal scales and specific tasks. 
Withthe recent success of generative artificial intelligence (AI) using pre-trainedtransformers for language modeling and vision with prompt engineering andfine-tuning, we are now moving towards generalizable AI. In particular, we arewitnessing the rise of AI foundation models that can perform competitively onmultiple domain-specific downstream tasks. Despite this progress, we are stillin the nascent stages of a generalizable AI model for global Earth systemmodels, regional climate models, and mesoscale weather models. Here, we reviewcurrent state-of-the-art AI approaches, primarily from transformer and operatorlearning literature in the context of meteorology. We provide our perspectiveon criteria for success towards a family of foundation models for nowcastingand forecasting weather and climate predictions. We also discuss how suchmodels can perform competitively on downstream tasks such as downscaling(super-resolution), identifying conditions conducive to the occurrence ofwildfires, and predicting consequential meteorological phenomena across variousspatiotemporal scales such as hurricanes and atmospheric rivers. In particular,we examine current AI methodologies and contend they have matured enough todesign and implement a weather foundation model.",,arXiv,"['cs.lg', 'cs.ai', 'physics.ao-ph', '68t07 (primary), 68t01, 86a08', 'i.2.0; i.4.0; j.2.5']",, promptor a conversational and autonomous prompt generation agent for intelligent text entry techniques,"['Junxiao Shen', 'John J. Dudley', 'Jingyao Zheng', 'Bill Byrne', 'Per Ola Kristensson']",http://arxiv.org/pdf/2310.08101v2.pdf,2023-10-12,," Text entry is an essential task in our day-to-day digital interactions.Numerous intelligent features have been developed to streamline this process,making text entry more effective, efficient, and fluid. These improvementsinclude sentence prediction and user personalization. However, as deeplearning-based language models become the norm for these advanced features, thenecessity for data collection and model fine-tuning increases. These challengescan be mitigated by harnessing the in-context learning capability of largelanguage models such as GPT-3.5. This unique feature allows the language modelto acquire new skills through prompts, eliminating the need for data collectionand fine-tuning. Consequently, large language models can learn various textprediction techniques. We initially showed that, for a sentence predictiontask, merely prompting GPT-3.5 surpassed a GPT-2 backed system and iscomparable with a fine-tuned GPT-3.5 model, with the latter two methodsrequiring costly data collection, fine-tuning and post-processing. However, thetask of prompting large language models to specialize in specific textprediction tasks can be challenging, particularly for designers withoutexpertise in prompt engineering. To address this, we introduce Promptor, aconversational prompt generation agent designed to engage proactively withdesigners. Promptor can automatically generate complex prompts tailored to meetspecific needs, thus offering a solution to this challenge. We conducted a userstudy involving 24 participants creating prompts for three intelligent textentry tasks, half of the participants used Promptor while the other halfdesigned prompts themselves. 
The results show that Promptor-designed promptsresult in a 35% increase in similarity and 22% in coherence over those bydesigners.",,arXiv,"['cs.cl', 'cs.ai']",, constitutionmaker interactively critiquing large language models by converting feedback into principles,"['Savvas Petridis', 'Ben Wedin', 'James Wexler', 'Aaron Donsbach', 'Mahima Pushkarna', 'Nitesh Goyal', 'Carrie J. Cai', 'Michael Terry']",http://arxiv.org/pdf/2310.15428v1.pdf,2023-10-24,," Large language model (LLM) prompting is a promising new approach for users tocreate and customize their own chatbots. However, current methods for steeringa chatbot's outputs, such as prompt engineering and fine-tuning, do not supportusers in converting their natural feedback on the model's outputs to changes inthe prompt or model. In this work, we explore how to enable users tointeractively refine model outputs through their feedback, by helping themconvert their feedback into a set of principles (i.e. a constitution) thatdictate the model's behavior. From a formative study, we (1) found that usersneeded support converting their feedback into principles for the chatbot and(2) classified the different principle types desired by users. Inspired bythese findings, we developed ConstitutionMaker, an interactive tool forconverting user feedback into principles, to steer LLM-based chatbots. WithConstitutionMaker, users can provide either positive or negative feedback innatural language, select auto-generated feedback, or rewrite the chatbot'sresponse; each mode of feedback automatically generates a principle that isinserted into the chatbot's prompt. In a user study with 14 participants, wecompare ConstitutionMaker to an ablated version, where users write their ownprinciples. With ConstitutionMaker, participants felt that their principlescould better guide the chatbot, that they could more easily convert theirfeedback into principles, and that they could write principles moreefficiently, with less mental demand. ConstitutionMaker helped users identifyways to improve the chatbot, formulate their intuitive responses to the modelinto feedback, and convert this feedback into specific and clear principles.Together, these findings inform future tools that support the interactivecritiquing of LLM outputs.",,arXiv,"['cs.hc', 'cs.ai']",, fewshot learning for sentence pair classification and its applications in software engineering,"['Robert Kraig Helmeczi', 'Mucahit Cevik', 'Savas Yıldırım']",http://arxiv.org/pdf/2306.08058v1.pdf,2023-06-13,," Few-shot learning-the ability to train models with access to limited data-hasbecome increasingly popular in the natural language processing (NLP) domain, aslarge language models such as GPT and T0 have been empirically shown to achievehigh performance in numerous tasks with access to just a handful of labeledexamples. Smaller language models such as BERT and its variants have also beenshown to achieve strong performance with just a handful of labeled exampleswhen combined with few-shot learning algorithms like pattern-exploitingtraining (PET) and SetFit. The focus of this work is to investigate theperformance of alternative few-shot learning approaches with BERT-based models.Specifically, vanilla fine-tuning, PET and SetFit are compared for numerousBERT-based checkpoints over an array of training set sizes. To facilitate thisinvestigation, applications of few-shot learning are considered in softwareengineering. 
For each task, high-performance techniques and their associatedmodel checkpoints are identified through detailed empirical analysis. Ourresults establish PET as a strong few-shot learning approach, and our analysisshows that with just a few hundred labeled examples it can achieve performancenear that of fine-tuning on full-sized data sets.",,arXiv,['cs.se'],, fewclue a chinese fewshot learning evaluation benchmark,"['Liang Xu', 'Xiaojing Lu', 'Chenyang Yuan', 'Xuanwei Zhang', 'Huilin Xu', 'Hu Yuan', 'Guoao Wei', 'Xiang Pan', 'Xin Tian', 'Libo Qin', 'Hu Hai']",http://arxiv.org/pdf/2107.07498v2.pdf,2021-07-15,," Pretrained Language Models (PLMs) have achieved tremendous success in naturallanguage understanding tasks. While different learning schemes -- fine-tuning,zero-shot, and few-shot learning -- have been widely explored and compared forlanguages such as English, there is comparatively little work in Chinese tofairly and comprehensively evaluate and compare these methods and thus hinderscumulative progress. In this paper, we introduce the Chinese Few-shot LearningEvaluation Benchmark (FewCLUE), the first comprehensive few-shot evaluationbenchmark in Chinese. It includes nine tasks, ranging from single-sentence andsentence-pair classification tasks to machine reading comprehension tasks. Wesystematically evaluate five state-of-the-art (SOTA) few-shot learning methods(including PET, ADAPET, LM-BFF, P-tuning and EFL), and compare theirperformance with fine-tuning and zero-shot learning schemes on the newlyconstructed FewCLUE benchmark. Experimental results reveal that: 1) The effectof different few-shot learning methods is sensitive to the pre-trained model towhich the methods are applied; 2) PET and P-tuning achieve the best overallperformance with RoBERTa and ERNIE respectively. Our benchmark is used in thefew-shot learning contest of NLPCC 2021. In addition, we provide auser-friendly toolkit, as well as an online leaderboard to help facilitatefurther progress on Chinese few-shot learning. We provide a baselineperformance on different learning methods, a reference for future research.",,arXiv,"['cs.cl', 'cs.ai']",, true fewshot learning with prompts a realworld perspective,"['Timo Schick', 'Hinrich Schütze']",http://arxiv.org/pdf/2111.13440v1.pdf,2021-11-26,," Prompt-based approaches are strong at few-shot learning. However, Perez etal. (2021) have recently cast doubt on their performance because they haddifficulty getting good results in a ""true"" few-shot setting in which promptsand hyperparameters cannot be tuned on a dev set. In view of this, we conductan extensive study of PET, a method that combines textual instructions withexample-based finetuning. We show that, if correctly configured, PET performsstrongly in a true few-shot setting, i.e., without a dev set. Crucial for thisstrong performance is PET's ability to intelligently handle multiple prompts.We then put our findings to a real-world test by running PET on RAFT, abenchmark of tasks taken directly from realistic NLP applications for which nolabeled dev or test sets are available. PET achieves a new state of the art onRAFT and performs close to non-expert humans for 7 out of 11 tasks. 
Theseresults demonstrate that prompt-based learners like PET excel at true few-shotlearning and underpin our belief that learning from instructions will play animportant role on the path towards human-like few-shot learning capabilities.",,arXiv,['cs.cl'],, prompting electra fewshot learning with discriminative pretrained models,"['Mengzhou Xia', 'Mikel Artetxe', 'Jingfei Du', 'Danqi Chen', 'Ves Stoyanov']",http://arxiv.org/pdf/2205.15223v3.pdf,2022-05-30,," Pre-trained masked language models successfully perform few-shot learning byformulating downstream tasks as text infilling. However, as a strongalternative in full-shot settings, discriminative pre-trained models likeELECTRA do not fit into the paradigm. In this work, we adapt prompt-basedfew-shot learning to ELECTRA and show that it outperforms masked languagemodels in a wide range of tasks. ELECTRA is pre-trained to distinguish if atoken is generated or original. We naturally extend that to prompt-basedfew-shot learning by training to score the originality of the target optionswithout introducing new parameters. Our method can be easily adapted to tasksinvolving multi-token predictions without extra computation overhead. Analysisshows that ELECTRA learns distributions that align better with downstreamtasks.",,arXiv,"['cs.cl', 'cs.lg']",, reordering examples helps during primingbased fewshot learning,"['Sawan Kumar', 'Partha Talukdar']",http://arxiv.org/pdf/2106.01751v1.pdf,2021-06-03,," The ability to learn from limited data, or few-shot learning, is a desirableand often critical requirement for NLP systems. While many existing methods dopoorly at learning from a handful of examples, large pretrained language modelshave recently been shown to be efficient few-shot learners. One approach tofew-shot learning, which does not require finetuning of model parameters, is toaugment the language model's input with priming text which is typicallyconstructed using task specific descriptions and examples. In this work, wefurther explore priming-based few-shot learning, with focus on using examplesas prompts. We show that presenting examples in the right order is key forgeneralization. We introduce PERO (Prompting with Examples in the Right Order),where we formulate few-shot learning as search over the set of permutations ofthe training examples. We show that PERO can learn to generalize efficientlyusing as few as 10 examples, in contrast to existing approaches. While thenewline token is a natural choice for separating the examples in the prompt, weshow that learning a new separator token can potentially provide further gainsin performance. We demonstrate the effectiveness of the proposed method on thetasks of sentiment classification, natural language inference and factretrieval. Finally, we analyze the learned prompts to reveal novel insights,including the idea that two training examples in the right order alone canprovide competitive performance for sentiment classification and naturallanguage inference.",,arXiv,['cs.cl'],, tuning language models as training data generators for augmentationenhanced fewshot learning,"['Yu Meng', 'Martin Michalski', 'Jiaxin Huang', 'Yu Zhang', 'Tarek Abdelzaher', 'Jiawei Han']",http://arxiv.org/pdf/2211.03044v2.pdf,2022-11-06,," Recent studies have revealed the intriguing few-shot learning ability ofpretrained language models (PLMs): They can quickly adapt to a new task whenfine-tuned on a small amount of labeled data formulated as prompts, withoutrequiring abundant task-specific annotations. 
Despite their promisingperformance, most existing few-shot approaches that only learn from the smalltraining set still underperform fully supervised training by nontrivialmargins. In this work, we study few-shot learning with PLMs from a differentperspective: We first tune an autoregressive PLM on the few-shot samples andthen use it as a generator to synthesize a large amount of novel trainingsamples which augment the original training set. To encourage the generator toproduce label-discriminative samples, we train it via weighted maximumlikelihood where the weight of each token is automatically adjusted based on adiscriminative meta-learning objective. A classification PLM can then befine-tuned on both the few-shot and the synthetic samples with regularizationfor better generalization and stability. Our approach FewGen achieves anoverall better result across seven classification tasks of the GLUE benchmarkthan existing few-shot learning methods, improving no-augmentation methods by5+ average points, and outperforming augmentation methods by 3+ average points.",,arXiv,"['cs.cl', 'cs.lg']",, cins comprehensive instruction for fewshot learning in taskoriented dialog systems,"['Fei Mi', 'Yitong Li', 'Yasheng Wang', 'Xin Jiang', 'Qun Liu']",http://arxiv.org/pdf/2109.04645v4.pdf,2021-09-10,," As labeling cost for different modules in task-oriented dialog (ToD) systemsis high, a major challenge in practice is to learn different tasks with theleast amount of labeled data. Recently, prompting methods over pre-trainedlanguage models (PLMs) have shown promising results for few-shot learning inToD. To better utilize the power of PLMs, this paper proposes ComprehensiveInstruction (CINS) that exploits PLMs with extra task-specific instructions. Wedesign a schema (definition, constraint, prompt) of instructions and theircustomized realizations for three important downstream tasks in ToD, i.e.intent classification, dialog state tracking, and natural language generation.A sequence-to-sequence model (T5) is adopted to solve these three tasks in aunified framework. Extensive experiments are conducted on these ToD tasks inrealistic few-shot learning scenarios with small validation data. Empiricalresults demonstrate that the proposed CINS approach consistently improvestechniques that finetune PLMs with raw input or short prompts.",,arXiv,"['cs.cl', 'cs.lg']",, exploring promptbased fewshot learning for grounded dialog generation,"['Chujie Zheng', 'Minlie Huang']",http://arxiv.org/pdf/2109.06513v2.pdf,2021-09-14,," Dialog models can be greatly strengthened through grounding on variousexternal information, but grounded dialog corpora are usually not naturallyaccessible. In this work, we focus on the few-shot learning for grounded dialoggeneration (GDG). We first propose a simple prompting method for GDG tasks,where different constructs of model input, such as the grounding source and theconversation context, are distinguished through continuous or discrete prompts.On three typical GDG tasks, we empirically demonstrate and analyze in-depth theeffectiveness of our method. We then conduct extensive experiments tothoroughly investigate how our prompting method works with differentpre-trained models. We show that prompted language models perform superiorly toconversational models, and further analyze various factors that influence theeffects of prompting. 
Overall, our work introduces a prompt-based perspectiveto the few-shot learning for GDG tasks, and provides valuable findings andinsights for future research.",,arXiv,['cs.cl'],, ontologyenhanced prompttuning for fewshot learning,"['Hongbin Ye', 'Ningyu Zhang', 'Shumin Deng', 'Xiang Chen', 'Hui Chen', 'Feiyu Xiong', 'Xi Chen', 'Huajun Chen']",http://arxiv.org/pdf/2201.11332v1.pdf,2022-01-27,," Few-shot Learning (FSL) is aimed to make predictions based on a limitednumber of samples. Structured data such as knowledge graphs and ontologylibraries has been leveraged to benefit the few-shot setting in various tasks.However, the priors adopted by the existing methods suffer from challengingknowledge missing, knowledge noise, and knowledge heterogeneity, which hinderthe performance for few-shot learning. In this study, we explore knowledgeinjection for FSL with pre-trained language models and proposeontology-enhanced prompt-tuning (OntoPrompt). Specifically, we develop theontology transformation based on the external knowledge graph to address theknowledge missing issue, which fulfills and converts structure knowledge totext. We further introduce span-sensitive knowledge injection via a visiblematrix to select informative knowledge to handle the knowledge noise issue. Tobridge the gap between knowledge and text, we propose a collective trainingalgorithm to optimize representations jointly. We evaluate our proposedOntoPrompt in three tasks, including relation extraction, event extraction, andknowledge graph completion, with eight datasets. Experimental resultsdemonstrate that our approach can obtain better few-shot performance thanbaselines.",,arXiv,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",, impossible triangle what's next for pretrained language models,"['Chenguang Zhu', 'Michael Zeng']",http://arxiv.org/pdf/2204.06130v2.pdf,2022-04-13,," Recent development of large-scale pre-trained language models (PLM) havesignificantly improved the capability of models in various NLP tasks, in termsof performance after task-specific fine-tuning and zero-shot / few-shotlearning. However, many of such models come with a dauntingly huge size thatfew institutions can afford to pre-train, fine-tune or even deploy, whilemoderate-sized models usually lack strong generalized few-shot learningcapabilities. In this paper, we first elaborate the current obstacles of usingPLM models in terms of the Impossible Triangle: 1) moderate model size, 2)state-of-the-art few-shot learning capability, and 3) state-of-the-artfine-tuning capability. We argue that all existing PLM models lack one or moreproperties from the Impossible Triangle. To remedy these missing properties ofPLMs, various techniques have been proposed, such as knowledge distillation,data augmentation and prompt learning, which inevitably brings additional workto the application of PLMs in real scenarios. We then offer insights intofuture research directions of PLMs to achieve the Impossible Triangle, andbreak down the task into several key phases.",,arXiv,['cs.cl'],, how to prompt opportunities and challenges of zero and fewshot learning for humanai interaction in creative applications of generative models,"['Hai Dang', 'Lukas Mecke', 'Florian Lehmann', 'Sven Goller', 'Daniel Buschek']",http://arxiv.org/pdf/2209.01390v1.pdf,2022-09-03,," Deep generative models have the potential to fundamentally change the way wecreate high-fidelity digital content but are often hard to control. 
Prompting agenerative model is a promising recent development that in principle enablesend-users to creatively leverage zero-shot and few-shot learning to assign newtasks to an AI ad-hoc, simply by writing them down. However, for the majorityof end-users writing effective prompts is currently largely a trial and errorprocess. To address this, we discuss the key opportunities and challenges forinteractive creative applications that use prompting as a new paradigm forHuman-AI interaction. Based on our analysis, we propose four design goals foruser interfaces that support prompting. We illustrate these with concrete UIdesign sketches, focusing on the use case of creative writing. The researchcommunity in HCI and AI can take these as starting points to develop adequateuser interfaces for models capable of zero- and few-shot learning.",,arXiv,"['cs.hc', 'cs.cl', 'h.5.2; i.2.7']",, differentiable entailment for parameter efficient few shot learning,"['Ethan Kim', 'Jerry Yang']",http://arxiv.org/pdf/2301.13345v1.pdf,2023-01-31,," Few-shot learning allows pre-trained language models to adapt to downstreamtasks while using a limited number of training examples. However, practicalapplications are limited when all model parameters must be optimized. In thiswork we apply a new technique for parameter efficient few shot learning whileadopting a strict definition of parameter efficiency. Our training methodcombines 1) intermediate training by reformulating natural language tasks asentailment tasks \cite{wang_entailment_2021} and 2) differentiable optimizationof template and label tokens \cite{zhang_differentiable_2021}. We quantify thetradeoff between parameter efficiency and performance in the few-shot regimeand propose a simple model agnostic approach that can be extended to any taskBy achieving competitive performance while only optimizing 3\% of a model'sparameters and allowing for batched inference, we allow for more efficientpractical deployment of models.",,arXiv,['cs.cl'],, "multilevel finetuning, data augmentation, and fewshot learning for specialized cyber threat intelligence","['Markus Bayer', 'Tobias Frey', 'Christian Reuter']",http://arxiv.org/pdf/2207.11076v1.pdf,2022-07-22,," Gathering cyber threat intelligence from open sources is becomingincreasingly important for maintaining and achieving a high level of securityas systems become larger and more complex. However, these open sources areoften subject to information overload. It is therefore useful to apply machinelearning models that condense the amount of information to what is necessary.Yet, previous studies and applications have shown that existing classifiers arenot able to extract specific information about emerging cybersecurity eventsdue to their low generalization ability. Therefore, we propose a system toovercome this problem by training a new classifier for each new incident. Sincethis requires a lot of labelled data using standard training methods, wecombine three different low-data regime techniques - transfer learning, dataaugmentation, and few-shot learning - to train a high-quality classifier fromvery few labelled instances. We evaluated our approach using a novel datasetderived from the Microsoft Exchange Server data breach of 2021 which waslabelled by three experts. Our findings reveal an increase in F1 score of morethan 21 points compared to standard training methods and more than 18 pointscompared to a state-of-the-art method in few-shot learning. 
Furthermore, theclassifier trained with this method and 32 instances is only less than 5 F1score points worse than a classifier trained with 1800 instances.",,arXiv,"['cs.cr', 'cs.cl']",, multitask pretraining of modular prompt for chinese fewshot learning,"['Tianxiang Sun', 'Zhengfu He', 'Qin Zhu', 'Xipeng Qiu', 'Xuanjing Huang']",http://arxiv.org/pdf/2210.07565v3.pdf,2022-10-14,," Prompt tuning is a parameter-efficient approach to adapting pre-trainedlanguage models to downstream tasks. Although prompt tuning has been shown tomatch the performance of full model tuning when training data is sufficient, ittends to struggle in few-shot learning settings. In this paper, we presentMulti-task Pre-trained Modular Prompt (MP2) to boost prompt tuning for few-shotlearning. MP2 is a set of combinable prompts pre-trained on 38 Chinese tasks.On downstream tasks, the pre-trained prompts are selectively activated andcombined, leading to strong compositional generalization to unseen tasks. Tobridge the gap between pre-training and fine-tuning, we formulate upstream anddownstream tasks into a unified machine reading comprehension task. Extensiveexperiments under two learning paradigms, i.e., gradient descent and black-boxtuning, show that MP2 significantly outperforms prompt tuning, full modeltuning, and prior prompt pre-training methods in few-shot settings. Inaddition, we demonstrate that MP2 can achieve surprisingly fast and strongadaptation to downstream tasks by merely learning 8 parameters to combine thepre-trained modular prompts.",,arXiv,['cs.cl'],, fewshot bot promptbased learning for dialogue systems,"['Andrea Madotto', 'Zhaojiang Lin', 'Genta Indra Winata', 'Pascale Fung']",http://arxiv.org/pdf/2110.08118v1.pdf,2021-10-15,," Learning to converse using only a few examples is a great challenge inconversational AI. The current best conversational models, which are eithergood chit-chatters (e.g., BlenderBot) or goal-oriented systems (e.g., MinTL),are language models (LMs) fine-tuned on large conversational datasets. Trainingthese models is expensive, both in terms of computational resources and time,and it is hard to keep them up to date with new conversational skills. A simpleyet unexplored solution is prompt-based few-shot learning (Brown et al. 2020)which does not require gradient-based fine-tuning but instead uses a fewexamples in the LM context as the only source of learning. In this paper, weexplore prompt-based few-shot learning in dialogue tasks. We benchmark LMs ofdifferent sizes in nine response generation tasks, which include fourknowledge-grounded tasks, a task-oriented generations task, three open-chattasks, and controlled stylistic generation, and five conversational parsingtasks, which include dialogue state tracking, graph path generation, personainformation extraction, document retrieval, and internet query generation. Thecurrent largest released LM (GPT-J-6B) using prompt-based few-shot learning,and thus requiring no training, achieves competitive performance to fullytrained state-of-the-art models. Moreover, we propose a novel prompt-basedfew-shot classifier, that also does not require any fine-tuning, to select themost appropriate prompt given a dialogue history. 
Finally, by combining thepower of prompt-based few-shot learning and a Skill Selector, we create anend-to-end chatbot named the Few-Shot Bot (FSB), which automatically selectsthe most appropriate conversational skill, queries different knowledge bases orthe internet, and uses the retrieved knowledge to generate a human-likeresponse, all using only few dialogue examples per skill.",,arXiv,"['cs.cl', 'cs.ai']",, "a neural network solves, explains, and generates university math problems by program synthesis and fewshot learning at human level","['Iddo Drori', 'Sarah Zhang', 'Reece Shuttleworth', 'Leonard Tang', 'Albert Lu', 'Elizabeth Ke', 'Kevin Liu', 'Linda Chen', 'Sunny Tran', 'Newman Cheng', 'Roman Wang', 'Nikhil Singh', 'Taylor L. Patti', 'Jayson Lynch', 'Avi Shporer', 'Nakul Verma', 'Eugene Wu', 'Gilbert Strang']",http://arxiv.org/pdf/2112.15594v4.pdf,2021-12-31,," We demonstrate that a neural network pre-trained on text and fine-tuned oncode solves mathematics course problems, explains solutions, and generates newquestions at a human level. We automatically synthesize programs using few-shotlearning and OpenAI's Codex transformer and execute them to solve courseproblems at 81% automatic accuracy. We curate a new dataset of questions fromMIT's largest mathematics courses (Single Variable and Multivariable Calculus,Differential Equations, Introduction to Probability and Statistics, LinearAlgebra, and Mathematics for Computer Science) and Columbia University'sComputational Linear Algebra. We solve questions from a MATH dataset (onPrealgebra, Algebra, Counting and Probability, Intermediate Algebra, NumberTheory, and Precalculus), the latest benchmark of advanced mathematics problemsdesigned to assess mathematical reasoning. We randomly sample questions andgenerate solutions with multiple modalities, including numbers, equations, andplots. The latest GPT-3 language model pre-trained on text automatically solvesonly 18.8% of these university questions using zero-shot learning and 30.8%using few-shot learning and the most recent chain of thought prompting. Incontrast, program synthesis with few-shot learning using Codex fine-tuned oncode generates programs that automatically solve 81% of these questions. Ourapproach improves the previous state-of-the-art automatic solution accuracy onthe benchmark topics from 8.8% to 81.1%. We perform a survey to evaluate thequality and difficulty of generated questions. This work is the first toautomatically solve university-level mathematics course questions at a humanlevel and the first work to explain and generate university-level mathematicscourse questions at scale, a milestone for higher education.",,arXiv,"['cs.lg', 'cs.ai']",, detecting hate speech with gpt3,"['Ke-Li Chiu', 'Annie Collins', 'Rohan Alexander']",http://arxiv.org/pdf/2103.12407v4.pdf,2021-03-23,," Sophisticated language models such as OpenAI's GPT-3 can generate hatefultext that targets marginalized groups. Given this capacity, we are interestedin whether large language models can be used to identify hate speech andclassify text as sexist or racist. We use GPT-3 to identify sexist and racisttext passages with zero-, one-, and few-shot learning. We find that with zero-and one-shot learning, GPT-3 can identify sexist or racist text with an averageaccuracy between 55 per cent and 67 per cent, depending on the category of textand type of learning. With few-shot learning, the model's accuracy can be ashigh as 85 per cent. 
Large language models have a role to play in hate speech detection, and with further development they could eventually be used to counter hate speech.",,arXiv,['cs.cl'],, true fewshot learning with language models,"['Ethan Perez', 'Douwe Kiela', 'Kyunghyun Cho']",http://arxiv.org/pdf/2105.11447v1.pdf,2021-05-24,," Pretrained language models (LMs) perform well on many tasks even when learning from a few examples, but prior work uses many held-out examples to tune various aspects of learning, such as hyperparameters, training objectives, and natural language templates (""prompts""). Here, we evaluate the few-shot ability of LMs when such held-out examples are unavailable, a setting we call true few-shot learning. We test two model selection criteria, cross-validation and minimum description length, for choosing LM prompts and hyperparameters in the true few-shot setting. On average, both marginally outperform random selection and greatly underperform selection based on held-out examples. Moreover, selection criteria often prefer models that perform significantly worse than randomly-selected ones. We find similar results even when taking into account our uncertainty in a model's true performance during selection, as well as when varying the amount of computation and number of examples used for selection. Overall, our findings suggest that prior work significantly overestimated the true few-shot ability of LMs given the difficulty of few-shot model selection.",,arXiv,"['cs.cl', 'cs.lg', 'stat.ml']",, "generate, annotate, and learn nlp with synthetic text","['Xuanli He', 'Islam Nassar', 'Jamie Kiros', 'Gholamreza Haffari', 'Mohammad Norouzi']",http://arxiv.org/pdf/2106.06168v3.pdf,2021-06-11,," This paper studies the use of language models as a source of synthetic unlabeled text for NLP. We formulate a general framework called ``generate, annotate, and learn (GAL)'' to take advantage of synthetic text within knowledge distillation, self-training, and few-shot learning applications. To generate high-quality task-specific text, we either fine-tune LMs on inputs from the task of interest, or prompt large LMs with few examples. We use the best available classifier to annotate synthetic text with soft pseudo labels for knowledge distillation and self-training, and use LMs to obtain hard labels for few-shot learning. We train new supervised models on the combination of labeled and pseudo-labeled data, which results in significant gains across several applications. We investigate key components of GAL and present theoretical and empirical arguments against the use of class-conditional LMs to generate synthetic labeled text instead of unlabeled text. GAL achieves new state-of-the-art knowledge distillation results for 6-layer transformers on the GLUE leaderboard.",,arXiv,['cs.lg'],, multimodal fewshot learning with frozen language models,"['Maria Tsimpoukelli', 'Jacob Menick', 'Serkan Cabi', 'S. M. Ali Eslami', 'Oriol Vinyals', 'Felix Hill']",http://arxiv.org/pdf/2106.13884v2.pdf,2021-06-25,," When trained at sufficient scale, auto-regressive language models exhibit the notable ability to learn a new language task after being prompted with just a few examples. Here, we present a simple, yet effective, approach for transferring this few-shot learning ability to a multimodal setting (vision and language). Using aligned image and caption data, we train a vision encoder to represent each image as a sequence of continuous embeddings, such that a pre-trained, frozen language model prompted with this prefix generates the appropriate caption. 
The resulting system is a multimodal few-shot learner,with the surprising ability to learn a variety of new tasks when conditioned onexamples, represented as a sequence of multiple interleaved image and textembeddings. We demonstrate that it can rapidly learn words for new objects andnovel visual categories, do visual question-answering with only a handful ofexamples, and make use of outside knowledge, by measuring a single model on avariety of established and new benchmarks.",,arXiv,"['cs.cv', 'cs.cl', 'cs.lg']",, incontext learning for fewshot dialogue state tracking,"['Yushi Hu', 'Chia-Hsuan Lee', 'Tianbao Xie', 'Tao Yu', 'Noah A. Smith', 'Mari Ostendorf']",http://arxiv.org/pdf/2203.08568v3.pdf,2022-03-16,," Collecting and annotating task-oriented dialogues is time-consuming andcostly; thus, zero and few shot learning could greatly benefit dialogue statetracking (DST). In this work, we propose an in-context learning (ICL) frameworkfor zero-shot and few-shot learning DST, where a large pre-trained languagemodel (LM) takes a test instance and a few exemplars as input, and directlydecodes the dialogue state without any parameter updates. To better leverage atabular domain description in the LM prompt, we reformulate DST into atext-to-SQL problem. We also propose a novel approach to retrieve annotateddialogues as exemplars. Empirical results on MultiWOZ show that our methodIC-DST substantially outperforms previous fine-tuned state-of-the-art models infew-shot settings. In addition, we test IC-DST in zero-shot settings, in whichthe model only takes a fixed task instruction as input, finding that itoutperforms previous zero-shot methods by a large margin.",,arXiv,['cs.cl'],, enabling classifiers to make judgements explicitly aligned with human values,"['Yejin Bang', 'Tiezheng Yu', 'Andrea Madotto', 'Zhaojiang Lin', 'Mona Diab', 'Pascale Fung']",http://arxiv.org/pdf/2210.07652v1.pdf,2022-10-14,," Many NLP classification tasks, such as sexism/racism detection or toxicitydetection, are based on human values. Yet, human values can vary under diversecultural conditions. Therefore, we introduce a framework for value-alignedclassification that performs prediction based on explicitly written humanvalues in the command. Along with the task, we propose a practical approachthat distills value-aligned knowledge from large-scale language models (LLMs)to construct value-aligned classifiers in two steps. First, we generatevalue-aligned training data from LLMs by prompt-based few-shot learning. Next,we fine-tune smaller classification models with the generated data for thetask. Empirical results show that our VA-Models surpass multiple baselines byat least 15.56% on the F1-score, including few-shot learning with OPT-175B andexisting text augmentation methods. We suggest that using classifiers withexplicit human value input improves both inclusivity & explainability in AI.",,arXiv,"['cs.cl', 'cs.ai']",, gps genetic prompt search for efficient fewshot learning,"['Hanwei Xu', 'Yujun Chen', 'Yulun Du', 'Nan Shao', 'Yanggang Wang', 'Haiyu Li', 'Zhilin Yang']",http://arxiv.org/pdf/2210.17041v1.pdf,2022-10-31,," Prompt-based techniques have demostrated great potential for improving thefew-shot generalization of pretrained language models. However, theirperformance heavily relies on the manual design of prompts and thus requires alot of human efforts. 
In this paper, we introduce Genetic Prompt Search (GPS) to improve few-shot learning with prompts, which utilizes a genetic algorithm to automatically search for high-performing prompts. GPS is gradient-free and requires no update of model parameters but only a small validation set. Experiments on diverse datasets proved the effectiveness of GPS, which outperforms manual prompts by a large margin of 2.6 points. Our method is also better than other parameter-efficient tuning methods such as prompt tuning.",,arXiv,['cs.cl'],, fewshot queryfocused summarization with prefixmerging,"['Ruifeng Yuan', 'Zili Wang', 'Ziqiang Cao', 'Wenjie Li']",http://arxiv.org/pdf/2211.16164v1.pdf,2022-11-29,," Query-focused summarization has been considered as an important extension for text summarization. It aims to generate a concise highlight for a given query. Different from text summarization, query-focused summarization has long been plagued by the problem of lacking high-quality large-scale datasets. In this paper, we investigate whether we can integrate and transfer the knowledge of text summarization and question answering to assist the few-shot learning in query-focused summarization. Here, we propose prefix-merging, a prefix-based pretraining strategy for few-shot learning in query-focused summarization. Drawing inspiration from prefix-tuning, we integrate the task knowledge from text summarization and question answering into a properly designed prefix and apply the merged prefix to query-focused summarization. With only a small amount of trainable parameters, prefix-merging outperforms fine-tuning on query-focused summarization. We further discuss the influence of different prefix designs and propose a visualized explanation for how prefix-merging works.",,arXiv,"['cs.cl', 'cs.ai']",, log parsing with promptbased fewshot learning,"['Van-Hoang Le', 'Hongyu Zhang']",http://arxiv.org/pdf/2302.07435v1.pdf,2023-02-15,," Logs generated by large-scale software systems provide crucial information for engineers to understand the system status and diagnose problems of the systems. Log parsing, which converts raw log messages into structured data, is the first step to enabling automated log analytics. Existing log parsers extract the common part as log templates using statistical features. However, these log parsers often fail to identify the correct templates and parameters because: 1) they often overlook the semantic meaning of log messages, and 2) they require domain-specific knowledge for different log datasets. To address the limitations of existing methods, in this paper, we propose LogPPT to capture the patterns of templates using prompt-based few-shot learning. LogPPT utilises a novel prompt tuning method to recognise keywords and parameters based on a few labelled log data. In addition, an adaptive random sampling algorithm is designed to select a small yet diverse training set. We have conducted extensive experiments on 16 public log datasets. The experimental results show that LogPPT is effective and efficient for log parsing.",,arXiv,['cs.se'],, automated fewshot classification with instructionfinetuned language models,"['Rami Aly', 'Xingjian Shi', 'Kaixiang Lin', 'Aston Zhang', 'Andrew Gordon Wilson']",http://arxiv.org/pdf/2305.12576v2.pdf,2023-05-21,," A particularly successful class of approaches for few-shot learning combines language models with prompts -- hand-crafted task descriptions that complement data samples.
However, designing prompts by hand for each task commonly requires domain knowledge and substantial guesswork. We observe, in the context of classification tasks, that instruction finetuned language models exhibit remarkable prompt robustness, and we subsequently propose a simple method to eliminate the need for handcrafted prompts, named AuT-Few. This approach consists of (i) a prompt retrieval module that selects suitable task instructions from the instruction-tuning knowledge base, and (ii) the generation of two distinct, semantically meaningful, class descriptions and a selection mechanism via cross-validation. Over $12$ datasets, spanning $8$ classification tasks, we show that AuT-Few outperforms current state-of-the-art few-shot learning methods. Moreover, AuT-Few is the best ranking method across datasets on the RAFT few-shot benchmark. Notably, these results are achieved without task-specific handcrafted prompts on unseen tasks.",,arXiv,['cs.cl'],, evaluating the decency and consistency of data validation tests generated by llms,"['Rohan Alexander', 'Lindsay Katz', 'Callandra Moore', 'Zane Schwartz']",http://arxiv.org/pdf/2310.01402v1.pdf,2023-10-02,," We investigated the potential of large language models (LLMs) in developing dataset validation tests. We carried out 96 experiments each for both GPT-3.5 and GPT-4, examining different prompt scenarios, learning modes, temperature settings, and roles. The prompt scenarios were: 1) Asking for expectations, 2) Asking for expectations with a given context, 3) Asking for expectations after requesting a simulation, and 4) Asking for expectations with a provided data sample. For learning modes, we tested: 1) zero-shot, 2) one-shot, and 3) few-shot learning. We also tested four temperature settings: 0, 0.4, 0.6, and 1. Furthermore, two distinct roles were considered: 1) ""helpful assistant"", 2) ""expert data scientist"". To gauge consistency, every setup was tested five times. The LLM-generated responses were benchmarked against a gold standard suite, created by an experienced data scientist knowledgeable about the data in question. We find there are considerable returns to the use of few-shot learning, and that the more explicit the data setting can be the better. The best LLM configurations complement, rather than substitute, the gold standard results. This study underscores the value LLMs can bring to the data cleaning and preparation stages of the data science workflow.",,arXiv,['stat.me'],, fewshot learning with multilingual language models,"['Xi Victoria Lin', 'Todor Mihaylov', 'Mikel Artetxe', 'Tianlu Wang', 'Shuohui Chen', 'Daniel Simig', 'Myle Ott', 'Naman Goyal', 'Shruti Bhosale', 'Jingfei Du', 'Ramakanth Pasunuru', 'Sam Shleifer', 'Punit Singh Koura', 'Vishrav Chaudhary', ""Brian O'Horo"", 'Jeff Wang', 'Luke Zettlemoyer', 'Zornitsa Kozareva', 'Mona Diab', 'Veselin Stoyanov', 'Xian Li']",http://arxiv.org/pdf/2112.10668v3.pdf,2021-12-20,," Large-scale generative language models such as GPT-3 are competitive few-shot learners. While these models are known to be able to jointly represent many different languages, their training data is dominated by English, potentially limiting their cross-lingual generalization. In this work, we train multilingual generative language models on a corpus covering a diverse set of languages, and study their few- and zero-shot learning capabilities in a wide range of tasks.
Our largest model with 7.5 billion parameters sets new state of the art in few-shot learning in more than 20 representative languages, outperforming GPT-3 of comparable size in multilingual commonsense reasoning (with +7.4% absolute accuracy improvement in 0-shot settings and +9.4% in 4-shot settings) and natural language inference (+5.4% in each of 0-shot and 4-shot settings). On the FLORES-101 machine translation benchmark, our model outperforms GPT-3 on 171 out of 182 directions with 32 training examples, while surpassing the official supervised baseline in 45 directions. We conduct an in-depth analysis of different multilingual prompting approaches, showing in particular that strong few-shot learning performance across languages can be achieved via cross-lingual transfer through both templates and demonstration examples. Finally, we evaluate our models in social value tasks such as hate speech detection in five languages and find it has limitations similar to comparable sized GPT-3 models.",,arXiv,"['cs.cl', 'cs.ai']",, flamingo a visual language model for fewshot learning,"['Jean-Baptiste Alayrac', 'Jeff Donahue', 'Pauline Luc', 'Antoine Miech', 'Iain Barr', 'Yana Hasson', 'Karel Lenc', 'Arthur Mensch', 'Katie Millican', 'Malcolm Reynolds', 'Roman Ring', 'Eliza Rutherford', 'Serkan Cabi', 'Tengda Han', 'Zhitao Gong', 'Sina Samangooei', 'Marianne Monteiro', 'Jacob Menick', 'Sebastian Borgeaud', 'Andrew Brock', 'Aida Nematzadeh', 'Sahand Sharifzadeh', 'Mikolaj Binkowski', 'Ricardo Barreira', 'Oriol Vinyals', 'Andrew Zisserman', 'Karen Simonyan']",http://arxiv.org/pdf/2204.14198v2.pdf,2022-04-29,," Building models that can be rapidly adapted to novel tasks using only a handful of annotated examples is an open challenge for multimodal machine learning research. We introduce Flamingo, a family of Visual Language Models (VLM) with this ability. We propose key architectural innovations to: (i) bridge powerful pretrained vision-only and language-only models, (ii) handle sequences of arbitrarily interleaved visual and textual data, and (iii) seamlessly ingest images or videos as inputs. Thanks to their flexibility, Flamingo models can be trained on large-scale multimodal web corpora containing arbitrarily interleaved text and images, which is key to endow them with in-context few-shot learning capabilities. We perform a thorough evaluation of our models, exploring and measuring their ability to rapidly adapt to a variety of image and video tasks. These include open-ended tasks such as visual question-answering, where the model is prompted with a question which it has to answer; captioning tasks, which evaluate the ability to describe a scene or an event; and close-ended tasks such as multiple-choice visual question-answering. For tasks lying anywhere on this spectrum, a single Flamingo model can achieve a new state of the art with few-shot learning, simply by prompting the model with task-specific examples. On numerous benchmarks, Flamingo outperforms models fine-tuned on thousands of times more task-specific data.",,arXiv,"['cs.cv', 'cs.ai', 'cs.lg']",, "code generation tools (almost) for free a study of fewshot, pretrained language models on code","['Patrick Bareiß', 'Beatriz Souza', ""Marcelo d'Amorim"", 'Michael Pradel']",http://arxiv.org/pdf/2206.01335v2.pdf,2022-06-02,," Few-shot learning with large-scale, pre-trained language models is a powerful way to answer questions about code, e.g., how to complete a given code example, or even generate code snippets from scratch.
The success of these models raises the question whether they could serve as a basis for building a wide range of code generation tools. Traditionally, such tools are built manually and separately for each task. Instead, few-shot learning may allow obtaining different tools from a single pre-trained language model by simply providing a few examples or a natural language description of the expected tool behavior. This paper studies to what extent a state-of-the-art, pre-trained language model of code, Codex, may serve this purpose. We consider three code manipulation and code generation tasks targeted by a range of traditional tools: (i) code mutation; (ii) test oracle generation from natural language documentation; and (iii) test case generation. For each task, we compare few-shot learning to a manually built tool. Our results show that the model-based tools complement (code mutation), are on par (test oracle generation), or even outperform their respective traditionally built tool (test case generation), while imposing far less effort to develop them. By comparing the effectiveness of different variants of the model-based tools, we provide insights on how to design an appropriate input (""prompt"") to the model and what influence the size of the model has. For example, we find that providing a small natural language description of the code generation task is an easy way to improve predictions. Overall, we conclude that few-shot language models are surprisingly effective, yet there is still more work to be done, such as exploring more diverse ways of prompting and tackling even more involved tasks.",,arXiv,"['cs.se', 'cs.lg']",, discrete and soft prompting for multilingual models,"['Mengjie Zhao', 'Hinrich Schütze']",http://arxiv.org/pdf/2109.03630v1.pdf,2021-09-08,," It has been shown for English that discrete and soft prompting perform strongly in few-shot learning with pretrained language models (PLMs). In this paper, we show that discrete and soft prompting perform better than finetuning in multilingual cases: Crosslingual transfer and in-language training of multilingual natural language inference. For example, with 48 English training examples, finetuning obtains 33.74% accuracy in crosslingual transfer, barely surpassing the majority baseline (33.33%). In contrast, discrete and soft prompting outperform finetuning, achieving 36.43% and 38.79%. We also demonstrate good performance of prompting with training data in multiple languages other than English.",,arXiv,['cs.cl'],, sentence simplification via large language models,"['Yutao Feng', 'Jipeng Qiang', 'Yun Li', 'Yunhao Yuan', 'Yi Zhu']",http://arxiv.org/pdf/2302.11957v1.pdf,2023-02-23,," Sentence Simplification aims to rephrase complex sentences into simpler sentences while retaining original meaning. Large Language models (LLMs) have demonstrated the ability to perform a variety of natural language processing tasks. However, it is not yet known whether LLMs can serve as a high-quality sentence simplification system. In this work, we empirically analyze the zero-/few-shot learning ability of LLMs by evaluating them on a number of benchmark test sets.
Experimental results show LLMs outperform state-of-the-art sentence simplification methods, and are judged to be on a par with human annotators.",,arXiv,"['cs.cl', 'cs.ai']",, gpt3 models are poor fewshot learners in the biomedical domain,"['Milad Moradi', 'Kathrin Blagec', 'Florian Haberl', 'Matthias Samwald']",http://arxiv.org/pdf/2109.02555v2.pdf,2021-09-06,," Deep neural language models have set new breakthroughs in many tasks of Natural Language Processing (NLP). Recent work has shown that deep transformer language models (pretrained on large amounts of texts) can achieve high levels of task-specific few-shot performance comparable to state-of-the-art models. However, the ability of these large language models in few-shot transfer learning has not yet been explored in the biomedical domain. We investigated the performance of two powerful transformer language models, i.e. GPT-3 and BioBERT, in few-shot settings on various biomedical NLP tasks. The experimental results showed that, to a great extent, both the models underperform a language model fine-tuned on the full training data. Although GPT-3 had already achieved near state-of-the-art results in few-shot knowledge transfer on open-domain NLP tasks, it could not perform as effectively as BioBERT, which is orders of magnitude smaller than GPT-3. Given that BioBERT was already pretrained on large biomedical text corpora, our study suggests that language models may largely benefit from in-domain pretraining in task-specific few-shot learning. However, in-domain pretraining seems not to be sufficient; novel pretraining and few-shot learning strategies are required in the biomedical NLP domain.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, making pretrained language models better fewshot learners,"['Tianyu Gao', 'Adam Fisch', 'Danqi Chen']",http://arxiv.org/pdf/2012.15723v2.pdf,2020-12-31,," The recent GPT-3 model (Brown et al., 2020) achieves remarkable few-shot performance solely by leveraging a natural-language prompt and a few task demonstrations as input context. Inspired by their findings, we study few-shot learning in a more practical scenario, where we use smaller language models for which fine-tuning is computationally efficient. We present LM-BFF--better few-shot fine-tuning of language models--a suite of simple and complementary techniques for fine-tuning language models on a small number of annotated examples. Our approach includes (1) prompt-based fine-tuning together with a novel pipeline for automating prompt generation; and (2) a refined strategy for dynamically and selectively incorporating demonstrations into each context. Finally, we present a systematic evaluation for analyzing few-shot performance on a range of NLP tasks, including classification and regression. Our experiments demonstrate that our methods combine to dramatically outperform standard fine-tuning procedures in this low resource setting, achieving up to 30% absolute improvement, and 11% on average across all tasks.
Our approach makes minimal assumptions on task resources and domain expertise, and hence constitutes a strong task-agnostic method for few-shot learning.",,arXiv,"['cs.cl', 'cs.lg']",, list lite prompted selftraining makes parameterefficient fewshot learners,"['Yaqing Wang', 'Subhabrata Mukherjee', 'Xiaodong Liu', 'Jing Gao', 'Ahmed Hassan Awadallah', 'Jianfeng Gao']",http://arxiv.org/pdf/2110.06274v2.pdf,2021-10-12,," We present a new method, LiST, short for Lite Prompted Self-Training, for parameter-efficient fine-tuning of large pre-trained language models (PLMs) for few-shot learning. LiST improves over recent methods that adopt prompt-based fine-tuning (FN) using two key techniques. The first is the use of self-training to leverage large amounts of unlabeled data for prompt-based FN in few-shot settings. We use self-training in conjunction with meta-learning for re-weighting noisy pseudo-prompt labels. Self-training is expensive as it requires updating all the model parameters repetitively. Therefore, we use a second technique for light-weight fine-tuning where we introduce a small number of task-specific parameters that are fine-tuned during self-training while keeping the PLM encoder frozen. Our experiments show that LiST can effectively leverage unlabeled data to improve the model performance for few-shot learning. Additionally, the fine-tuning is efficient as it only updates a small percentage of parameters and the overall model footprint is reduced since several tasks can share a common PLM encoder as backbone. A comprehensive study on six NLU tasks demonstrates that LiST improves by 35% over classic fine-tuning and 6% over prompt-based FN, with a 96% reduction in the number of trainable parameters when fine-tuned with no more than 30 labeled examples from each task. With only 14M tunable parameters, LiST outperforms GPT-3 in-context learning by 33% on few-shot NLU tasks.",,arXiv,['cs.cl'],, fewshot stance detection via targetaware prompt distillation,"['Yan Jiang', 'Jinhua Gao', 'Huawei Shen', 'Xueqi Cheng']",http://arxiv.org/pdf/2206.13214v1.pdf,2022-06-27,," Stance detection aims to identify whether the author of a text is in favor of, against, or neutral to a given target. The main challenge of this task is two-fold: few-shot learning resulting from the varying targets and the lack of contextual information of the targets. Existing works mainly focus on solving the second issue by designing attention-based models or introducing noisy external knowledge, while the first issue remains under-explored. In this paper, inspired by the potential capability of pre-trained language models (PLMs) serving as knowledge bases and few-shot learners, we propose to introduce prompt-based fine-tuning for stance detection. PLMs can provide essential contextual information for the targets and enable few-shot learning via prompts. Considering the crucial role of the target in the stance detection task, we design target-aware prompts and propose a novel verbalizer. Instead of mapping each label to a concrete word, our verbalizer maps each label to a vector and picks the label that best captures the correlation between the stance and the target. Moreover, to alleviate the possible defect of dealing with varying targets with a single hand-crafted prompt, we propose to distill the information learned from multiple prompts.
Experimental results show the superior performance of our proposed model in both full-data and few-shot scenarios.",,arXiv,['cs.cl'],, multimodality helps unimodality crossmodal fewshot learning with multimodal models,"['Zhiqiu Lin', 'Samuel Yu', 'Zhiyi Kuang', 'Deepak Pathak', 'Deva Ramanan']",http://arxiv.org/pdf/2301.06267v4.pdf,2023-01-16,," The ability to quickly learn a new task with minimal instruction - known as few-shot learning - is a central aspect of intelligent agents. Classical few-shot benchmarks make use of few-shot samples from a single modality, but such samples may not be sufficient to characterize an entire concept class. In contrast, humans use cross-modal information to learn new concepts efficiently. In this work, we demonstrate that one can indeed build a better ${\bf visual}$ dog classifier by ${\bf read}$ing about dogs and ${\bf listen}$ing to them bark. To do so, we exploit the fact that recent multimodal foundation models such as CLIP are inherently cross-modal, mapping different modalities to the same representation space. Specifically, we propose a simple cross-modal adaptation approach that learns from few-shot examples spanning different modalities. By repurposing class names as additional one-shot training samples, we achieve SOTA results with an embarrassingly simple linear classifier for vision-language adaptation. Furthermore, we show that our approach can benefit existing methods such as prefix tuning, adapters, and classifier ensembling. Finally, to explore other modalities beyond vision and language, we construct the first (to our knowledge) audiovisual few-shot benchmark and use cross-modal training to improve the performance of both image and audio classification.",,arXiv,"['cs.cv', 'cs.ai', 'cs.lg', 'cs.sd', 'eess.as']",, rplkg robust prompt learning with knowledge graph,"['Yewon Kim', 'YongTaek Lim', 'Dokyung Yoon', 'KyungWoo Song']",http://arxiv.org/pdf/2304.10805v1.pdf,2023-04-21,," Large-scale pre-trained models are known to be transferable and to generalize well to unseen datasets. Recently, multimodal pre-trained models such as CLIP show significant performance improvement in diverse experiments. However, when the labeled dataset is limited, the generalization to a new dataset or domain is still challenging. To improve the generalization performance on few-shot learning, there have been diverse efforts, such as prompt learning and adapters. However, the current few-shot adaptation methods are not interpretable, and they require a high computation cost for adaptation. In this study, we propose a new method, robust prompt learning with knowledge graph (RPLKG). Based on the knowledge graph, we automatically design diverse interpretable and meaningful prompt sets. Our model obtains cached embeddings of prompt sets after one forwarding from a large pre-trained model. After that, the model optimizes the prompt selection process with GumbelSoftmax. In this way, our model is trained using relatively little memory and learning time. Also, RPLKG selects the optimal interpretable prompt automatically, depending on the dataset. In summary, RPLKG is i) interpretable, ii) requires small computation resources, and iii) easy to incorporate prior human knowledge. To validate RPLKG, we provide comprehensive experimental results on few-shot learning, domain generalization and new class generalization settings.
RPLKG shows a significant performance improvement compared to zero-shot learning and competitive performance against several prompt learning methods using much lower resources.",,arXiv,"['cs.ai', 'cs.lg']",, adversarial robustness of promptbased fewshot learning for natural language understanding,"['Venkata Prabhakara Sarath Nookala', 'Gaurav Verma', 'Subhabrata Mukherjee', 'Srijan Kumar']",http://arxiv.org/pdf/2306.11066v2.pdf,2023-06-19,," State-of-the-art few-shot learning (FSL) methods leverage prompt-based fine-tuning to obtain remarkable results for natural language understanding (NLU) tasks. While most prior FSL methods focus on improving downstream task performance, there is a limited understanding of the adversarial robustness of such methods. In this work, we conduct an extensive study of several state-of-the-art FSL methods to assess their robustness to adversarial perturbations. To better understand the impact of various factors towards robustness (or the lack of it), we evaluate prompt-based FSL methods against fully fine-tuned models for aspects such as the use of unlabeled data, multiple prompts, number of few-shot examples, model size and type. Our results on six GLUE tasks indicate that compared to fully fine-tuned models, vanilla FSL methods lead to a notable relative drop in task performance (i.e., are less robust) in the face of adversarial perturbations. However, using (i) unlabeled data for prompt-based FSL and (ii) multiple prompts flip the trend. We further demonstrate that increasing the number of few-shot examples and model size lead to increased adversarial robustness of vanilla FSL methods. Broadly, our work sheds light on the adversarial robustness evaluation of prompt-based FSL methods for NLU tasks.",,arXiv,"['cs.cl', 'cs.lg']",, unifiedskg unifying and multitasking structured knowledge grounding with texttotext language models,"['Tianbao Xie', 'Chen Henry Wu', 'Peng Shi', 'Ruiqi Zhong', 'Torsten Scholak', 'Michihiro Yasunaga', 'Chien-Sheng Wu', 'Ming Zhong', 'Pengcheng Yin', 'Sida I. Wang', 'Victor Zhong', 'Bailin Wang', 'Chengzu Li', 'Connor Boyle', 'Ansong Ni', 'Ziyu Yao', 'Dragomir Radev', 'Caiming Xiong', 'Lingpeng Kong', 'Rui Zhang', 'Noah A. Smith', 'Luke Zettlemoyer', 'Tao Yu']",http://arxiv.org/pdf/2201.05966v3.pdf,2022-01-16,," Structured knowledge grounding (SKG) leverages structured knowledge to complete user requests, such as semantic parsing over databases and question answering over knowledge bases. Since the inputs and outputs of SKG tasks are heterogeneous, they have been studied separately by different communities, which limits systematic and compatible research on SKG. In this paper, we overcome this limitation by proposing the UnifiedSKG framework, which unifies 21 SKG tasks into a text-to-text format, aiming to promote systematic SKG research, instead of being exclusive to a single task, domain, or dataset. We use UnifiedSKG to benchmark T5 with different sizes and show that T5, with simple modifications when necessary, achieves state-of-the-art performance on almost all of the 21 tasks. We further demonstrate that multi-task prefix-tuning improves the performance on most tasks, largely improving the overall performance. UnifiedSKG also facilitates the investigation of zero-shot and few-shot learning, and we show that T0, GPT-3, and Codex struggle in zero-shot and few-shot learning for SKG. We also use UnifiedSKG to conduct a series of controlled experiments on structured knowledge encoding variants across SKG tasks.
UnifiedSKG is easily extensible to more tasks, and it is open-sourced at https://github.com/hkunlp/unifiedskg.",,arXiv,['cs.cl'],, a promptbased fewshot learning approach to software conflict detection,"['Robert K. Helmeczi', 'Mucahit Cevik', 'Savas Yıldırım']",http://arxiv.org/pdf/2211.02709v1.pdf,2022-11-04,," A software requirement specification (SRS) document is an essential part of the software development life cycle which outlines the requirements that a software program in development must satisfy. This document is often specified by a diverse group of stakeholders and is subject to continual change, making the process of maintaining the document and detecting conflicts between requirements an essential task in software development. Notably, projects that do not address conflicts in the SRS document early on face considerable problems later in the development life cycle. These problems incur substantial costs in terms of time and money, and these costs often become insurmountable barriers that ultimately result in the termination of a software project altogether. As a result, early detection of SRS conflicts is critical to project sustainability. The conflict detection task is approached in numerous ways, many of which require a significant amount of manual intervention from developers, or require access to a large amount of labeled, task-specific training data. In this work, we propose using a prompt-based learning approach to perform few-shot learning for conflict detection. We compare our results to supervised learning approaches that use pretrained language models, such as BERT and its variants. Our results show that prompting with just 32 labeled examples can achieve a similar level of performance in many key metrics to that of supervised learning on training sets that are magnitudes larger in size. In contrast to many other conflict detection approaches, we make no assumptions about the type of underlying requirements, allowing us to analyze pairings of both functional and non-functional requirements. This allows us to omit the potentially expensive task of filtering out non-functional requirements from our dataset.",,arXiv,['cs.se'],, noisy channel language model prompting for fewshot text classification,"['Sewon Min', 'Mike Lewis', 'Hannaneh Hajishirzi', 'Luke Zettlemoyer']",http://arxiv.org/pdf/2108.04106v3.pdf,2021-08-09,," We introduce a noisy channel approach for language model prompting in few-shot text classification. Instead of computing the likelihood of the label given the input (referred to as direct models), channel models compute the conditional probability of the input given the label, and are thereby required to explain every word in the input. We use channel models for recently proposed few-shot learning methods with no or very limited updates to the language model parameters, via either in-context demonstration or prompt tuning. Our experiments show that, for both methods, channel models significantly outperform their direct counterparts, which we attribute to their stability, i.e., lower variance and higher worst-case accuracy.
We also present extensive ablations that provide recommendations for when to use channel prompt tuning instead of other competitive methods (e.g., direct head tuning): channel prompt tuning is preferred when the number of training examples is small, labels in the training data are imbalanced, or generalization to unseen labels is required.",,arXiv,"['cs.cl', 'cs.ai']",, conqx semantic expansion of spoken queries for intent detection based on conditioned text generation,"['Eyup Halit Yilmaz', 'Cagri Toraman']",http://arxiv.org/pdf/2109.00729v1.pdf,2021-09-02,," Intent detection of spoken queries is a challenging task due to their noisy structure and short length. To provide additional information regarding the query and enhance the performance of intent detection, we propose a method for semantic expansion of spoken queries, called ConQX, which utilizes the text generation ability of an auto-regressive language model, GPT-2. To avoid off-topic text generation, we condition the input query to a structured context with prompt mining. We then apply zero-shot, one-shot, and few-shot learning. We lastly use the expanded queries to fine-tune BERT and RoBERTa for intent detection. The experimental results show that the performance of intent detection can be improved by our semantic expansion method.",,arXiv,"['cs.cl', 'cs.ai']",, do promptbased models really understand the meaning of their prompts,"['Albert Webson', 'Ellie Pavlick']",http://arxiv.org/pdf/2109.01247v2.pdf,2021-09-02,," Recently, a boom of papers has shown extraordinary progress in zero-shot and few-shot learning with various prompt-based models. It is commonly argued that prompts help models to learn faster in the same way that humans learn faster when provided with task instructions expressed in natural language. In this study, we experiment with over 30 prompt templates manually written for natural language inference (NLI). We find that models learn just as fast with many prompts that are intentionally irrelevant or even pathologically misleading as they do with instructively ""good"" prompts. Further, such patterns hold even for models as large as 175 billion parameters (Brown et al., 2020) as well as the recently proposed instruction-tuned models which are trained on hundreds of prompts (Sanh et al., 2022). That is, instruction-tuned models often produce good predictions with irrelevant and misleading prompts even at zero shots. In sum, notwithstanding prompt-based models' impressive improvement, we find evidence of serious limitations that question the degree to which such improvement is derived from models understanding task instructions in ways analogous to humans' use of task instructions.",,arXiv,['cs.cl'],, fewshot emotion recognition in conversation with sequential prototypical networks,"['Gaël Guibon', 'Matthieu Labeau', 'Hélène Flamein', 'Luce Lefeuvre', 'Chloé Clavel']",http://arxiv.org/pdf/2109.09366v1.pdf,2021-09-20,," Several recent studies on dyadic human-human interactions have been done on conversations without specific business objectives. However, many companies might benefit from studies dedicated to more precise environments such as after-sales services or customer satisfaction surveys. In this work, we place ourselves in the scope of a live chat customer service in which we want to detect emotions and their evolution in the conversation flow.
This context leads to multiple challenges that range from exploiting restricted, small and mostly unlabeled datasets to finding and adapting methods for such a context. We tackle these challenges by using Few-Shot Learning while making the hypothesis that it can serve conversational emotion classification for different languages and sparse labels. We contribute by proposing a variation of Prototypical Networks for sequence labeling in conversation that we name ProtoSeq. We test this method on two datasets with different languages: daily conversations in English and customer service chat conversations in French. When applied to emotion classification in conversations, our method proved to be competitive even when compared to other ones.",,arXiv,"['cs.cl', 'cs.lg']",, "crosslingual alignment of contextual word embeddings, with applications to zeroshot dependency parsing","['Tal Schuster', 'Ori Ram', 'Regina Barzilay', 'Amir Globerson']",http://arxiv.org/pdf/1902.09492v2.pdf,2019-02-25,," We introduce a novel method for multilingual transfer that utilizes deep contextual embeddings, pretrained in an unsupervised fashion. While contextual embeddings have been shown to yield richer representations of meaning compared to their static counterparts, aligning them poses a challenge due to their dynamic nature. To this end, we construct context-independent variants of the original monolingual spaces and utilize their mapping to derive an alignment for the context-dependent spaces. This mapping readily supports processing of a target language, improving transfer by context-aware embeddings. Our experimental results demonstrate the effectiveness of this approach for zero-shot and few-shot learning of dependency parsing. Specifically, our method consistently outperforms the previous state-of-the-art on 6 tested languages, yielding an improvement of 6.8 LAS points on average.",,arXiv,"['cs.cl', 'cs.lg']",, calibrate before use improving fewshot performance of language models,"['Tony Z. Zhao', 'Eric Wallace', 'Shi Feng', 'Dan Klein', 'Sameer Singh']",http://arxiv.org/pdf/2102.09690v2.pdf,2021-02-19,," GPT-3 can perform numerous tasks when provided a natural language prompt that contains a few training examples. We show that this type of few-shot learning can be unstable: the choice of prompt format, training examples, and even the order of the training examples can cause accuracy to vary from near chance to near state-of-the-art. We demonstrate that this instability arises from the bias of language models towards predicting certain answers, e.g., those that are placed near the end of the prompt or are common in the pre-training data. To mitigate this, we first estimate the model's bias towards each answer by asking for its prediction when given the training prompt and a content-free test input such as ""N/A"". We then fit calibration parameters that cause the prediction for this input to be uniform across answers. On a diverse set of tasks, this contextual calibration procedure substantially improves GPT-3 and GPT-2's average accuracy (up to 30.0% absolute) and reduces variance across different choices of the prompt.",,arXiv,"['cs.cl', 'cs.lg']",, what's in a measurement using gpt3 on semeval 2021 task 8 measeval,"['Curt Kohler', 'Ron Daniel Jr']",http://arxiv.org/pdf/2106.14720v1.pdf,2021-06-28,," In the summer of 2020 OpenAI released its GPT-3 autoregressive language model to much fanfare.
While the model has shown promise on tasks in several areas, it has not always been clear when the results were cherry-picked or when they were the unvarnished output. We were particularly interested in what benefits GPT-3 could bring to the SemEval 2021 MeasEval task - identifying measurements and their associated attributes in scientific literature. We had already experimented with multi-turn question answering as a solution to this task. We wanted to see if we could use GPT-3's few-shot learning capabilities to more easily develop a solution that would have better performance than our prior work. Unfortunately, we have not been successful in that effort. This paper discusses the approach we used, challenges we encountered, and results we observed. Some of the problems we encountered were simply due to the state of the art. For example, the limits on the size of the prompt and answer limited the amount of the training signal that could be offered. Others are more fundamental. We are unaware of generative models that excel in retaining factual information. Also, the impact of changes in the prompts is unpredictable, making it hard to reliably improve performance.",,arXiv,['cs.cl'],, flex unifying evaluation for fewshot nlp,"['Jonathan Bragg', 'Arman Cohan', 'Kyle Lo', 'Iz Beltagy']",http://arxiv.org/pdf/2107.07170v2.pdf,2021-07-15,," Few-shot NLP research is highly active, yet conducted in disjoint research threads with evaluation suites that lack challenging-yet-realistic testing setups and fail to employ careful experimental design. Consequently, the community does not know which techniques perform best or even if they outperform simple baselines. In response, we formulate the FLEX Principles, a set of requirements and best practices for unified, rigorous, valid, and cost-sensitive few-shot NLP evaluation. These principles include Sample Size Design, a novel approach to benchmark design that optimizes statistical accuracy and precision while keeping evaluation costs manageable. Following the principles, we release the FLEX benchmark, which includes four few-shot transfer settings, zero-shot evaluation, and a public leaderboard that covers diverse NLP tasks. In addition, we present UniFew, a prompt-based model for few-shot learning that unifies pretraining and finetuning prompt formats, eschewing complex machinery of recent prompt-based approaches in adapting downstream task formats to language model pretraining objectives. We demonstrate that despite simplicity, UniFew achieves results competitive with both popular meta-learning and prompt-based approaches.",,arXiv,"['cs.cl', 'cs.lg', 'i.2.7']",, useridentifier implicit user representations for simple and effective personalized sentiment analysis,"['Fatemehsadat Mireshghallah', 'Vaishnavi Shrivastava', 'Milad Shokouhi', 'Taylor Berg-Kirkpatrick', 'Robert Sim', 'Dimitrios Dimitriadis']",http://arxiv.org/pdf/2110.00135v2.pdf,2021-10-01,," Global models are trained to be as generalizable as possible, with user invariance considered desirable since the models are shared across multitudes of users. As such, these models are often unable to produce personalized responses for individual users, based on their data. Contrary to widely-used personalization techniques based on few-shot learning, we propose UserIdentifier, a novel scheme for training a single shared model for all users. Our approach produces personalized responses by adding fixed, non-trainable user identifiers to the input data.
We empirically demonstrate that this proposed method outperforms the prefix-tuning based state-of-the-art approach by up to 13%, on a suite of sentiment analysis datasets. We also show that, unlike prior work, this method needs neither any additional model parameters nor any extra rounds of few-shot fine-tuning.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cl']",, instanceaware prompt learning for language understanding and generation,"['Feihu Jin', 'Jinliang Lu', 'Jiajun Zhang', 'Chengqing Zong']",http://arxiv.org/pdf/2201.07126v1.pdf,2022-01-18,," Recently, prompt learning has become a new paradigm to utilize pre-trained language models (PLMs) and achieves promising results in downstream tasks with a negligible increase of parameters. The current usage of discrete and continuous prompts assumes that the prompt is fixed for a specific task and all samples in the task share the same prompt. However, a task may contain quite diverse samples in which some are easy and others are difficult, and diverse prompts are desirable. In this paper, we propose an instance-aware prompt learning method that learns a different prompt for each instance. Specifically, we suppose that each learnable prompt token has a different contribution to different instances, and we learn the contribution by calculating the relevance score between an instance and each prompt token. The contribution-weighted prompt would be instance-aware. We apply our method to both unidirectional and bidirectional PLMs on both language understanding and generation tasks. Extensive experiments demonstrate that our method obtains considerable improvements compared to strong baselines. Especially, our method achieves the state-of-the-art on the SuperGLUE few-shot learning benchmark.",,arXiv,['cs.cl'],, generating training data with language models towards zeroshot language understanding,"['Yu Meng', 'Jiaxin Huang', 'Yu Zhang', 'Jiawei Han']",http://arxiv.org/pdf/2202.04538v2.pdf,2022-02-09,," Pretrained language models (PLMs) have demonstrated remarkable performance in various natural language processing tasks: Unidirectional PLMs (e.g., GPT) are well known for their superior text generation capabilities; bidirectional PLMs (e.g., BERT) have been the prominent choice for natural language understanding (NLU) tasks. While both types of models have achieved promising few-shot learning performance, their potential for zero-shot learning has been underexplored. In this paper, we present a simple approach that uses both types of PLMs for fully zero-shot learning of NLU tasks without requiring any task-specific data: A unidirectional PLM generates class-conditioned texts guided by prompts, which are used as the training data for fine-tuning a bidirectional PLM.
With quality training data selected based on the generation probability and regularization techniques (label smoothing and temporal ensembling) applied to the fine-tuning stage for better generalization and stability, our approach demonstrates strong performance across seven classification tasks of the GLUE benchmark (e.g., 72.3/73.8 on MNLI-m/mm and 92.8 on SST-2), significantly outperforming zero-shot prompting methods and achieving even comparable results to strong few-shot approaches using 32 training samples per class.",,arXiv,"['cs.cl', 'cs.lg']",, variational autoencoder with disentanglement priors for lowresource taskspecific natural language generation,"['Zhuang Li', 'Lizhen Qu', 'Qiongkai Xu', 'Tongtong Wu', 'Tianyang Zhan', 'Gholamreza Haffari']",http://arxiv.org/pdf/2202.13363v3.pdf,2022-02-27,," In this paper, we propose a variational autoencoder with disentanglement priors, VAE-DPRIOR, for task-specific natural language generation with none or a handful of task-specific labeled examples. In order to tackle compositional generalization across tasks, our model performs disentangled representation learning by introducing a conditional prior for the latent content space and another conditional prior for the latent label space. Both types of priors satisfy a novel property called $\epsilon$-disentangled. We show both empirically and theoretically that the novel priors can disentangle representations even without specific regularizations as in the prior work. The content prior enables directly sampling diverse content representations from the content space learned from the seen tasks, and fusing them with the representations of novel tasks for generating semantically diverse texts in the low-resource settings. Our extensive experiments demonstrate the superior performance of our model over competitive baselines in terms of i) data augmentation in continuous zero/few-shot learning, and ii) text style transfer in the few-shot setting.",,arXiv,['cs.cl'],, claret pretraining a correlationaware contexttoevent transformer for eventcentric generation and classification,"['Yucheng Zhou', 'Tao Shen', 'Xiubo Geng', 'Guodong Long', 'Daxin Jiang']",http://arxiv.org/pdf/2203.02225v2.pdf,2022-03-04,," Generating new events given context with correlated ones plays a crucial role in many event-centric reasoning tasks. Existing works either limit their scope to specific scenarios or overlook event-level correlations. In this paper, we propose to pre-train a general Correlation-aware context-to-Event Transformer (ClarET) for event-centric reasoning. To achieve this, we propose three novel event-centric objectives, i.e., whole event recovering, contrastive event-correlation encoding and prompt-based event locating, which highlight event-level correlations with effective training. The proposed ClarET is applicable to a wide range of event-centric reasoning scenarios, considering its versatility of (i) event-correlation types (e.g., causal, temporal, contrast), (ii) application formulations (i.e., generation and classification), and (iii) reasoning types (e.g., abductive, counterfactual and ending reasoning).
Empirical fine-tuning results, as well as zero- and few-shot learning, on 9 benchmarks (5 generation and 4 classification tasks covering 4 reasoning types with diverse event correlations), verify its effectiveness and generalization ability.",,arXiv,['cs.cl'],, pretrained tokenreplaced detection model as fewshot learner,"['Zicheng Li', 'Shoushan Li', 'Guodong Zhou']",http://arxiv.org/pdf/2203.03235v2.pdf,2022-03-07,," Pre-trained masked language models have demonstrated remarkable ability as few-shot learners. In this paper, as an alternative, we propose a novel approach to few-shot learning with pre-trained token-replaced detection models like ELECTRA. In this approach, we reformulate a classification or a regression task as a token-replaced detection problem. Specifically, we first define a template and label description words for each task and put them into the input to form a natural language prompt. Then, we employ the pre-trained token-replaced detection model to predict which label description word is the most original (i.e., least replaced) among all label description words in the prompt. A systematic evaluation on 16 datasets demonstrates that our approach outperforms few-shot learners with pre-trained masked language models in both one-sentence and two-sentence learning tasks.",,arXiv,"['cs.cl', 'cs.ai']",, prototypical verbalizer for promptbased fewshot tuning,"['Ganqu Cui', 'Shengding Hu', 'Ning Ding', 'Longtao Huang', 'Zhiyuan Liu']",http://arxiv.org/pdf/2203.09770v1.pdf,2022-03-18,," Prompt-based tuning for pre-trained language models (PLMs) has shown its effectiveness in few-shot learning. Typically, prompt-based tuning wraps the input text into a cloze question. To make predictions, the model maps the output words to labels via a verbalizer, which is either manually designed or automatically built. However, manual verbalizers heavily depend on domain-specific prior knowledge and human efforts, while finding appropriate label words automatically still remains challenging. In this work, we propose the prototypical verbalizer (ProtoVerb) which is built directly from training data. Specifically, ProtoVerb learns prototype vectors as verbalizers by contrastive learning. In this way, the prototypes summarize training instances and are able to enclose rich class-level semantics. We conduct experiments on both topic classification and entity typing tasks, and the results demonstrate that ProtoVerb significantly outperforms current automatic verbalizers, especially when training data is extremely scarce. More surprisingly, ProtoVerb consistently boosts prompt-based tuning even on untuned PLMs, indicating an elegant non-tuning way to utilize PLMs. Our codes are available at https://github.com/thunlp/OpenPrompt.",,arXiv,"['cs.cl', 'cs.lg']",, inverse is better! fast and accurate prompt for fewshot slot tagging,"['Yutai Hou', 'Cheng Chen', 'Xianzhen Luo', 'Bohan Li', 'Wanxiang Che']",http://arxiv.org/pdf/2204.00885v1.pdf,2022-04-02,," Prompting methods recently achieve impressive success in few-shot learning. These methods modify input samples with prompt sentence pieces, and decode label tokens to map samples to corresponding labels. However, such a paradigm is very inefficient for the task of slot tagging. Since slot tagging samples are multiple consecutive words in a sentence, the prompting methods have to enumerate all n-gram token spans to find all the possible slots, which greatly slows down the prediction. To tackle this, we introduce an inverse paradigm for prompting.
Different from the classic prompts mapping tokens to labels, we reversely predict slot values given slot types. Such inverse prompting only requires a one-turn prediction for each slot type and greatly speeds up the prediction. Besides, we propose a novel Iterative Prediction Strategy, from which the model learns to refine predictions by considering the relations between different slot types. We find, somewhat surprisingly, the proposed method not only predicts faster but also significantly improves the effect (improving by over 6.1 F1 points in the 10-shot setting) and achieves new state-of-the-art performance.",,arXiv,"['cs.cl', 'cs.ai']",, leveraging pretrained language models for conversational information seeking from text,"['Patrizio Bellan', 'Mauro Dragoni', 'Chiara Ghidini']",http://arxiv.org/pdf/2204.03542v1.pdf,2022-03-31,," Recent advances in Natural Language Processing, and in particular the construction of very large pre-trained language representation models, are opening up new perspectives on the construction of conversational information seeking (CIS) systems. In this paper we investigate the usage of in-context learning and pre-trained language representation models to address the problem of information extraction from process description documents, in an incremental question and answering oriented fashion. In particular we investigate the usage of the native GPT-3 (Generative Pre-trained Transformer 3) model, together with two in-context learning customizations that inject conceptual definitions and a limited number of samples in a few-shot learning fashion. The results highlight the potential of the approach and the usefulness of the in-context learning customizations, which can substantially contribute to address the ""training data challenge"" of deep learning based NLP techniques in the BPM field. It also highlights the challenge posed by control flow relations for which further training needs to be devised.",,arXiv,"['cs.cl', 'cs.ai']",, superprompting utilizing modelindependent contextual data to reduce data annotation required in visual commonsense tasks,"['Navid Rezaei', 'Marek Z. Reformat']",http://arxiv.org/pdf/2204.11922v1.pdf,2022-04-25,," Pre-trained language models have shown excellent results in few-shot learning scenarios using in-context learning. Although it is impressive, the size of language models can be prohibitive to make them usable in on-device applications, such as sensors or smartphones. With smaller language models, task-specific data annotation is needed to fine-tune the language model for a specific purpose. However, data annotation can have a substantial financial and time burden for small research groups, startups, and even companies. In this paper, we analyze different prompt-based fine-tuning techniques to improve results on both language and multimodal causal transformer models. To evaluate our results, we use a dataset focusing on visual commonsense reasoning in time. Our results show that by simple model-agnostic prompt-based fine-tuning, comparable results can be reached by only using 35%-40% of the fine-tuning training dataset. The proposed approaches result in significant time and financial savings. As the proposed methods make minimal architectural assumptions, other researchers can use the results in their transformer models with minimal adaptations.
We plan to release the source code freely to make it easier for the community to use and contribute to our work.",,arXiv,"['cs.cl', 'cs.ai']",, building a role specified opendomain dialogue system leveraging largescale language models,"['Sanghwan Bae', 'Donghyun Kwak', 'Sungdong Kim', 'Donghoon Ham', 'Soyoung Kang', 'Sang-Woo Lee', 'Woomyoung Park']",http://arxiv.org/pdf/2205.00176v1.pdf,2022-04-30,," Recent open-domain dialogue models have brought numerous breakthroughs. However, building a chat system is not scalable since it often requires a considerable volume of human-human dialogue data, especially when enforcing features such as persona, style, or safety. In this work, we study the challenge of imposing roles on open-domain dialogue systems, with the goal of making the systems maintain consistent roles while conversing naturally with humans. To accomplish this, the system must satisfy a role specification that includes certain conditions on the stated features as well as a system policy on whether or not certain types of utterances are allowed. For this, we propose an efficient data collection framework leveraging in-context few-shot learning of large-scale language models for building a role-satisfying dialogue dataset from scratch. We then compare various architectures for open-domain dialogue systems in terms of meeting role specifications while maintaining conversational abilities. Automatic and human evaluations show that our models return few out-of-bounds utterances, keeping competitive performance on general metrics. We release a Korean dialogue dataset we built for further research.",,arXiv,['cs.cl'],, easynlp a comprehensive and easytouse toolkit for natural language processing,"['Chengyu Wang', 'Minghui Qiu', 'Chen Shi', 'Taolin Zhang', 'Tingting Liu', 'Lei Li', 'Jianing Wang', 'Ming Wang', 'Jun Huang', 'Wei Lin']",http://arxiv.org/pdf/2205.00258v2.pdf,2022-04-30,," The success of Pre-Trained Models (PTMs) has reshaped the development of Natural Language Processing (NLP). Yet, it is not easy to obtain high-performing models and deploy them online for industrial practitioners. To bridge this gap, EasyNLP is designed to make it easy to build NLP applications, which supports a comprehensive suite of NLP algorithms. It further features knowledge-enhanced pre-training, knowledge distillation and few-shot learning functionalities for large-scale PTMs, and provides a unified framework of model training, inference and deployment for real-world applications. Currently, EasyNLP has powered over ten business units within Alibaba Group and is seamlessly integrated into the Platform of AI (PAI) products on Alibaba Cloud. The source code of our EasyNLP toolkit is released at GitHub (https://github.com/alibaba/EasyNLP).",,arXiv,['cs.cl'],, politics pretraining with samestory article comparison for ideology prediction and stance detection,"['Yujian Liu', 'Xinliang Frederick Zhang', 'David Wegsman', 'Nick Beauchamp', 'Lu Wang']",http://arxiv.org/pdf/2205.00619v1.pdf,2022-05-02,," Ideology is at the core of political science research. Yet, there still do not exist general-purpose tools to characterize and predict ideology across different genres of text. To this end, we study Pretrained Language Models using novel ideology-driven pretraining objectives that rely on the comparison of articles on the same story written by media of different ideologies. We further collect a large-scale dataset, consisting of more than 3.6M political news articles, for pretraining.
Our model POLITICS outperforms strong baselines and the previous state-of-the-art models on ideology prediction and stance detection tasks. Further analyses show that POLITICS is especially good at understanding long or formally written texts, and is also robust in few-shot learning scenarios.",,arXiv,['cs.cl'],, kecp knowledge enhanced contrastive prompting for fewshot extractive question answering,"['Jianing Wang', 'Chengyu Wang', 'Minghui Qiu', 'Qiuhui Shi', 'Hongbin Wang', 'Jun Huang', 'Ming Gao']",http://arxiv.org/pdf/2205.03071v1.pdf,2022-05-06,," Extractive Question Answering (EQA) is one of the most important tasks in Machine Reading Comprehension (MRC), which can be solved by fine-tuning the span selecting heads of Pre-trained Language Models (PLMs). However, most existing approaches for MRC may perform poorly in the few-shot learning scenario. To solve this issue, we propose a novel framework named Knowledge Enhanced Contrastive Prompt-tuning (KECP). Instead of adding pointer heads to PLMs, we introduce a seminal paradigm for EQA that transforms the task into a non-autoregressive Masked Language Modeling (MLM) generation problem. Simultaneously, rich semantics from the external knowledge base (KB) and the passage context serve as support for enhancing the representations of the query. In addition, to boost the performance of PLMs, we jointly train the model by the MLM and contrastive learning objectives. Experiments on multiple benchmarks demonstrate that our method consistently outperforms state-of-the-art approaches in few-shot settings by a large margin.",,arXiv,"['cs.cl', 'cs.ai']",, proqa structural promptbased pretraining for unified question answering,"['Wanjun Zhong', 'Yifan Gao', 'Ning Ding', 'Yujia Qin', 'Zhiyuan Liu', 'Ming Zhou', 'Jiahai Wang', 'Jian Yin', 'Nan Duan']",http://arxiv.org/pdf/2205.04040v2.pdf,2022-05-09,," Question Answering (QA) is a longstanding challenge in natural language processing. Existing QA works mostly focus on specific question types, knowledge domains, or reasoning skills. The specialty in QA research hinders systems from modeling commonalities between tasks and generalization for wider applications. To address this issue, we present ProQA, a unified QA paradigm that solves various tasks through a single model. ProQA takes a unified structural prompt as the bridge and improves the QA-centric ability by structural prompt-based pre-training. Through a structurally designed prompt-based input schema, ProQA concurrently models the knowledge generalization for all QA tasks while keeping the knowledge customization for every specific QA task. Furthermore, ProQA is pre-trained with a structural prompt-formatted large-scale synthesized corpus, which empowers the model with the commonly-required QA ability. Experimental results on 11 QA benchmarks demonstrate that ProQA consistently boosts performance in full data fine-tuning, few-shot learning, and zero-shot testing scenarios. Furthermore, ProQA exhibits strong ability in both continual learning and transfer learning by taking advantage of the structural prompt.",,arXiv,['cs.cl'],, allsh active learning guided by local sensitivity and hardness,"['Shujian Zhang', 'Chengyue Gong', 'Xingchao Liu', 'Pengcheng He', 'Weizhu Chen', 'Mingyuan Zhou']",http://arxiv.org/pdf/2205.04980v2.pdf,2022-05-10,," Active learning, which effectively collects informative unlabeled data for annotation, reduces the demand for labeled data.
In this work, we propose toretrieve unlabeled samples with a local sensitivity and hardness-awareacquisition function. The proposed method generates data copies through localperturbations and selects data points whose predictive likelihoods diverge themost from their copies. We further empower our acquisition function byinjecting the select-worst case perturbation. Our method achieves consistentgains over the commonly used active learning strategies in variousclassification tasks. Furthermore, we observe consistent improvements over thebaselines on the study of prompt selection in prompt-based few-shot learning.These experiments demonstrate that our acquisition guided by local sensitivityand hardness can be effective and beneficial for many NLP tasks.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, prototypical calibration for fewshot learning of language models,"['Zhixiong Han', 'Yaru Hao', 'Li Dong', 'Yutao Sun', 'Furu Wei']",http://arxiv.org/pdf/2205.10183v2.pdf,2022-05-20,," In-context learning of GPT-like models has been recognized as fragile acrossdifferent hand-crafted templates, and demonstration permutations. In this work,we propose prototypical calibration to adaptively learn a more robust decisionboundary for zero- and few-shot classification, instead of greedy decoding.Concretely, our method first adopts Gaussian mixture distribution to estimatethe prototypical clusters for all categories. Then we assign each cluster tothe corresponding label by solving a weighted bipartite matching problem. Givenan example, its prediction is calibrated by the likelihood of prototypicalclusters. Experimental results show that prototypical calibration yields asubstantial improvement on a diverse set of tasks. Extensive analysis acrossdifferent scales also indicates that our method calibrates the decisionboundary as expected, greatly improving the robustness of GPT to templates,permutations, and class imbalance.",,arXiv,['cs.cl'],, bbtv2 towards a gradientfree future with large language models,"['Tianxiang Sun', 'Zhengfu He', 'Hong Qian', 'Yunhua Zhou', 'Xuanjing Huang', 'Xipeng Qiu']",http://arxiv.org/pdf/2205.11200v2.pdf,2022-05-23,," Most downstream adaptation methods tune all or part of the parameters ofpre-trained models (PTMs) through gradient descent, where the tuning costincreases linearly with the growth of the model size. By contrast,gradient-free methods only require the forward computation of the PTM to tunethe prompt, retaining the benefits of efficient tuning and deployment. Though,past work on gradient-free tuning often introduces gradient descent to seek agood initialization of prompt and lacks versatility across tasks and PTMs. Inthis paper, we present BBTv2, an improved version of Black-Box Tuning, to drivePTMs for few-shot learning. We prepend continuous prompts to every layer of thePTM and propose a divide-and-conquer gradient-free algorithm to optimize theprompts at different layers alternately. Extensive experiments across varioustasks and PTMs show that BBTv2 can achieve comparable performance to full modeltuning and state-of-the-art parameter-efficient methods (e.g., Adapter, LoRA,BitFit, etc.) under few-shot settings while maintaining much fewer tunableparameters.",,arXiv,"['cs.cl', 'cs.ai']",, neural prompt search,"['Yuanhan Zhang', 'Kaiyang Zhou', 'Ziwei Liu']",http://arxiv.org/pdf/2206.04673v2.pdf,2022-06-09,," The size of vision models has grown exponentially over the last few years,especially after the emergence of Vision Transformer. 
This has motivated thedevelopment of parameter-efficient tuning methods, such as learning adapterlayers or visual prompt tokens, which allow a tiny portion of model parametersto be trained whereas the vast majority obtained from pre-training are frozen.However, designing a proper tuning method is non-trivial: one might need to tryout a lengthy list of design choices, not to mention that each downstreamdataset often requires custom designs. In this paper, we view the existingparameter-efficient tuning methods as ""prompt modules"" and propose NeuralprOmpt seArcH (NOAH), a novel approach that learns, for large vision models,the optimal design of prompt modules through a neural architecture searchalgorithm, specifically for each downstream dataset. By conducting extensiveexperiments on over 20 vision datasets, we demonstrate that NOAH (i) issuperior to individual prompt modules, (ii) has a good few-shot learningability, and (iii) is domain-generalizable. The code and models are availableat https://github.com/Davidzhangyuanhan/NOAH.",,arXiv,"['cs.cv', 'cs.ai', 'cs.lg']",, prompting decision transformer for fewshot policy generalization,"['Mengdi Xu', 'Yikang Shen', 'Shun Zhang', 'Yuchen Lu', 'Ding Zhao', 'Joshua B. Tenenbaum', 'Chuang Gan']",http://arxiv.org/pdf/2206.13499v1.pdf,2022-06-27,," Humans can leverage prior experience and learn novel tasks from a handful ofdemonstrations. In contrast to offline meta-reinforcement learning, which aimsto achieve quick adaptation through better algorithm design, we investigate theeffect of architecture inductive bias on the few-shot learning capability. Wepropose a Prompt-based Decision Transformer (Prompt-DT), which leverages thesequential modeling ability of the Transformer architecture and the promptframework to achieve few-shot adaptation in offline RL. We design thetrajectory prompt, which contains segments of the few-shot demonstrations, andencodes task-specific information to guide policy generation. Our experimentsin five MuJoCo control benchmarks show that Prompt-DT is a strong few-shotlearner without any extra finetuning on unseen target tasks. Prompt-DToutperforms its variants and strong meta offline RL baselines by a large marginwith a trajectory prompt containing only a few timesteps. Prompt-DT is alsorobust to prompt length changes and can generalize to out-of-distribution (OOD)environments.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cv', 'cs.ro']",, fewshot training llms for projectspecific codesummarization,"['Toufique Ahmed', 'Premkumar Devanbu']",http://arxiv.org/pdf/2207.04237v2.pdf,2022-07-09,," Very large language models (LLMs), such as GPT-3 and Codex have achievedstate-of-the-art performance on several natural-language tasks, and show greatpromise also for code. A particularly exciting aspect of LLMs is their knackfor few-shot and zero-shot learning: they can learn to perform a task with veryfew examples. Few-shotting has particular synergies in software engineering,where there are a lot of phenomena (identifier names, APIs, terminology, codingpatterns) that are known to be highly project-specific. 
However,project-specific data can be quite limited, especially early in the history ofa project; thus the few-shot learning capacity of LLMs might be very relevant.In this paper, we investigate the use few-shot training with the very large GPT(Generative Pre-trained Transformer) Codex model, and find evidence suggestingthat one can significantly surpass state-of-the-art models forcode-summarization, leveraging project-specific training.",,arXiv,"['cs.se', 'cs.lg']",, convolutional bypasses are better vision transformer adapters,"['Shibo Jie', 'Zhi-Hong Deng']",http://arxiv.org/pdf/2207.07039v3.pdf,2022-07-14,," The pretrain-then-finetune paradigm has been widely adopted in computervision. But as the size of Vision Transformer (ViT) grows exponentially, thefull finetuning becomes prohibitive in view of the heavier storage overhead.Motivated by parameter-efficient transfer learning (PETL) on languagetransformers, recent studies attempt to insert lightweight adaptation modules(e.g., adapter layers or prompt tokens) to pretrained ViT and only finetunethese modules while the pretrained weights are frozen. However, these moduleswere originally proposed to finetune language models and did not take intoaccount the prior knowledge specifically for visual tasks. In this paper, wepropose to construct Convolutional Bypasses (Convpass) in ViT as adaptationmodules, introducing only a small amount (less than 0.5% of model parameters)of trainable parameters to adapt the large ViT. Different from other PETLmethods, Convpass benefits from the hard-coded inductive bias of convolutionallayers and thus is more suitable for visual tasks, especially in the low-dataregime. Experimental results on VTAB-1K benchmark and few-shot learningdatasets show that Convpass outperforms current language-oriented adaptationmodules, demonstrating the necessity to tailor vision-oriented adaptationmodules for adapting vision models.",,arXiv,['cs.cv'],, selfsupervision can be a good fewshot learner,"['Yuning Lu', 'Liangjian Wen', 'Jianzhuang Liu', 'Yajing Liu', 'Xinmei Tian']",http://arxiv.org/pdf/2207.09176v1.pdf,2022-07-19,," Existing few-shot learning (FSL) methods rely on training with a largelabeled dataset, which prevents them from leveraging abundant unlabeled data.From an information-theoretic perspective, we propose an effective unsupervisedFSL method, learning representations with self-supervision. Following theInfoMax principle, our method learns comprehensive representations by capturingthe intrinsic structure of the data. Specifically, we maximize the mutualinformation (MI) of instances and their representations with a low-bias MIestimator to perform self-supervised pre-training. Rather than supervisedpre-training focusing on the discriminable features of the seen classes, ourself-supervised model has less bias toward the seen classes, resulting inbetter generalization for unseen classes. We explain that supervisedpre-training and self-supervised pre-training are actually maximizing differentMI objectives. Extensive experiments are further conducted to analyze their FSLperformance with various training settings. Surprisingly, the results show thatself-supervised pre-training can outperform supervised pre-training under theappropriate conditions. 
Compared with state-of-the-art FSL methods, ourapproach achieves comparable performance on widely used FSL benchmarks withoutany labels of the base classes.",,arXiv,['cs.cv'],, language model cascades,"['David Dohan', 'Winnie Xu', 'Aitor Lewkowycz', 'Jacob Austin', 'David Bieber', 'Raphael Gontijo Lopes', 'Yuhuai Wu', 'Henryk Michalewski', 'Rif A. Saurous', 'Jascha Sohl-dickstein', 'Kevin Murphy', 'Charles Sutton']",http://arxiv.org/pdf/2207.10342v2.pdf,2022-07-21,," Prompted models have demonstrated impressive few-shot learning abilities.Repeated interactions at test-time with a single model, or the composition ofmultiple models together, further expands capabilities. These compositions areprobabilistic models, and may be expressed in the language of graphical modelswith random variables whose values are complex data types such as strings.Cases with control flow and dynamic structure require techniques fromprobabilistic programming, which allow implementing disparate model structuresand inference strategies in a unified language. We formalize several existingtechniques from this perspective, including scratchpads / chain of thought,verifiers, STaR, selection-inference, and tool use. We refer to the resultingprograms as language model cascades.",,arXiv,"['cs.cl', 'cs.ai']",, fewshot adaptation works with unpredictable data,"['Jun Shern Chan', 'Michael Pieler', 'Jonathan Jao', 'Jérémy Scheurer', 'Ethan Perez']",http://arxiv.org/pdf/2208.01009v2.pdf,2022-08-01,," Prior work on language models (LMs) shows that training on a large number ofdiverse tasks improves few-shot learning (FSL) performance on new tasks. Wetake this to the extreme, automatically extracting 413,299 tasks from internettables - orders of magnitude more than the next-largest public datasets.Finetuning on the resulting dataset leads to improved FSL performance onNatural Language Processing (NLP) tasks, but not proportionally to datasetscale. In fact, we find that narrow subsets of our dataset sometimes outperformmore diverse datasets. For example, finetuning on software documentation fromsupport.google.com raises FSL performance by a mean of +7.5% on 52 downstreamtasks, which beats training on 40 human-curated NLP datasets (+6.7%).Finetuning on various narrow datasets leads to similar broad improvementsacross test tasks, suggesting that the gains are not from domain adaptation butadapting to FSL in general. We do not observe clear patterns between thedatasets that lead to FSL gains, leaving open questions about why certain datahelps with FSL.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, robotic interestingness via humaninformed fewshot object detection,"['Seungchan Kim', 'Chen Wang', 'Bowen Li', 'Sebastian Scherer']",http://arxiv.org/pdf/2208.01084v1.pdf,2022-08-01,," Interestingness recognition is crucial for decision making in autonomousexploration for mobile robots. Previous methods proposed an unsupervised onlinelearning approach that can adapt to environments and detect interesting scenesquickly, but lack the ability to adapt to human-informed interesting objects.To solve this problem, we introduce a human-interactive framework,AirInteraction, that can detect human-informed objects via few-shot onlinelearning. To reduce the communication bandwidth, we first apply an onlineunsupervised learning algorithm on the unmanned vehicle for interestingnessrecognition and then only send the potential interesting scenes to abase-station for human inspection. 
The human operator is able to draw andprovide bounding box annotations for particular interesting objects, which aresent back to the robot to detect similar objects via few-shot learning. Onlyusing few human-labeled examples, the robot can learn novel interesting objectcategories during the mission and detect interesting scenes that contain theobjects. We evaluate our method on various interesting scene recognitiondatasets. To the best of our knowledge, it is the first human-informed few-shotobject detection framework for autonomous exploration.",,arXiv,['cs.ro'],, atlas fewshot learning with retrieval augmented language models,"['Gautier Izacard', 'Patrick Lewis', 'Maria Lomeli', 'Lucas Hosseini', 'Fabio Petroni', 'Timo Schick', 'Jane Dwivedi-Yu', 'Armand Joulin', 'Sebastian Riedel', 'Edouard Grave']",http://arxiv.org/pdf/2208.03299v3.pdf,2022-08-05,," Large language models have shown impressive few-shot results on a wide rangeof tasks. However, when knowledge is key for such results, as is the case fortasks such as question answering and fact checking, massive parameter counts tostore knowledge seem to be needed. Retrieval augmented models are known toexcel at knowledge intensive tasks without the need for as many parameters, butit is unclear whether they work in few-shot settings. In this work we presentAtlas, a carefully designed and pre-trained retrieval augmented language modelable to learn knowledge intensive tasks with very few training examples. Weperform evaluations on a wide range of tasks, including MMLU, KILT andNaturalQuestions, and study the impact of the content of the document index,showing that it can easily be updated. Notably, Atlas reaches over 42% accuracyon Natural Questions using only 64 examples, outperforming a 540B parametersmodel by 3% despite having 50x fewer parameters.",,arXiv,['cs.cl'],, limits of an ai program for solving college math problems,['Ernest Davis'],http://arxiv.org/pdf/2208.06906v1.pdf,2022-08-14,," Drori et al. (2022) report that ""A neural network solves, explains, andgenerates university math problems by program synthesis and few-shot learningat human level ... [It] automatically answers 81\% of university-levelmathematics problems."" The system they describe is indeed impressive; however,the above description is very much overstated. The work of solving the problemsis done, not by a neural network, but by the symbolic algebra package Sympy.Problems of various formats are excluded from consideration. The so-called""explanations"" are just rewordings of lines of code. Answers are marked ascorrect that are not in the form specified in the problem. Most seriously, itseems that in many cases the system uses the correct answer given in the testcorpus to guide its path to solving the problem.",,arXiv,['cs.ai'],, efficient fewshot learning without prompts,"['Lewis Tunstall', 'Nils Reimers', 'Unso Eun Seo Jo', 'Luke Bates', 'Daniel Korat', 'Moshe Wasserblat', 'Oren Pereg']",http://arxiv.org/pdf/2209.11055v1.pdf,2022-09-22,," Recent few-shot methods, such as parameter-efficient fine-tuning (PEFT) andpattern exploiting training (PET), have achieved impressive results inlabel-scarce settings. However, they are difficult to employ since they aresubject to high variability from manually crafted prompts, and typicallyrequire billion-parameter language models to achieve high accuracy. 
To addressthese shortcomings, we propose SetFit (Sentence Transformer Fine-tuning), anefficient and prompt-free framework for few-shot fine-tuning of SentenceTransformers (ST). SetFit works by first fine-tuning a pretrained ST on a smallnumber of text pairs, in a contrastive Siamese manner. The resulting model isthen used to generate rich text embeddings, which are used to train aclassification head. This simple framework requires no prompts or verbalizers,and achieves high accuracy with orders of magnitude less parameters thanexisting techniques. Our experiments show that SetFit obtains comparableresults with PEFT and PET techniques, while being an order of magnitude fasterto train. We also show that SetFit can be applied in multilingual settings bysimply switching the ST body. Our code is available athttps://github.com/huggingface/setfit and our datasets athttps://huggingface.co/setfit .",,arXiv,['cs.cl'],, core a retrievethenedit framework for counterfactual data generation,"['Tanay Dixit', 'Bhargavi Paranjape', 'Hannaneh Hajishirzi', 'Luke Zettlemoyer']",http://arxiv.org/pdf/2210.04873v2.pdf,2022-10-10,," Counterfactual data augmentation (CDA) -- i.e., adding minimally perturbedinputs during training -- helps reduce model reliance on spurious correlationsand improves generalization to out-of-distribution (OOD) data. Prior work ongenerating counterfactuals only considered restricted classes of perturbations,limiting their effectiveness. We present COunterfactual Generation viaRetrieval and Editing (CORE), a retrieval-augmented generation framework forcreating diverse counterfactual perturbations for CDA. For each trainingexample, CORE first performs a dense retrieval over a task-related unlabeledtext corpus using a learned bi-encoder and extracts relevant counterfactualexcerpts. CORE then incorporates these into prompts to a large language modelwith few-shot learning capabilities, for counterfactual editing. Conditioninglanguage model edits on naturally occurring data results in diverseperturbations. Experiments on natural language inference and sentiment analysisbenchmarks show that CORE counterfactuals are more effective at improvinggeneralization to OOD data compared to other DA approaches. We also show thatthe CORE retrieval framework can be used to encourage diversity in manuallyauthored perturbations",,arXiv,['cs.cl'],, continual training of language models for fewshot learning,"['Zixuan Ke', 'Haowei Lin', 'Yijia Shao', 'Hu Xu', 'Lei Shu', 'Bing Liu']",http://arxiv.org/pdf/2210.05549v1.pdf,2022-10-11,," Recent work on applying large language models (LMs) achieves impressiveperformance in many NLP applications. Adapting or posttraining an LM using anunlabeled domain corpus can produce even better performance for end-tasks inthe domain. This paper proposes the problem of continually extending an LM byincrementally post-train the LM with a sequence of unlabeled domain corpora toexpand its knowledge without forgetting its previous skills. The goal is toimprove the few-shot end-task learning in these domains. The resulting systemis called CPT (Continual PostTraining), which to our knowledge, is the firstcontinual post-training system. 
Experimental results verify its effectiveness.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg', 'cs.ne']",, knowledgegrounded dialog state tracking,"['Dian Yu', 'Mingqiu Wang', 'Yuan Cao', 'Izhak Shafran', 'Laurent El Shafey', 'Hagen Soltau']",http://arxiv.org/pdf/2210.06656v1.pdf,2022-10-13,," Knowledge (including structured knowledge such as schema and ontology, andunstructured knowledge such as web corpus) is a critical part of dialogunderstanding, especially for unseen tasks and domains. Traditionally, suchdomain-specific knowledge is encoded implicitly into model parameters for theexecution of downstream tasks, which makes training inefficient. In addition,such models are not easily transferable to new tasks with different schemas. Inthis work, we propose to perform dialog state tracking grounded on knowledgeencoded externally. We query relevant knowledge of various forms based on thedialog context where such information can ground the prediction of dialogstates. We demonstrate superior performance of our proposed method over strongbaselines, especially in the few-shot learning setting.",,arXiv,['cs.cl'],, "visionlanguage pretraining basics, recent advances, and future trends","['Zhe Gan', 'Linjie Li', 'Chunyuan Li', 'Lijuan Wang', 'Zicheng Liu', 'Jianfeng Gao']",http://arxiv.org/pdf/2210.09263v1.pdf,2022-10-17,," This paper surveys vision-language pre-training (VLP) methods for multimodalintelligence that have been developed in the last few years. We group theseapproaches into three categories: ($i$) VLP for image-text tasks, such as imagecaptioning, image-text retrieval, visual question answering, and visualgrounding; ($ii$) VLP for core computer vision tasks, such as (open-set) imageclassification, object detection, and segmentation; and ($iii$) VLP forvideo-text tasks, such as video captioning, video-text retrieval, and videoquestion answering. For each category, we present a comprehensive review ofstate-of-the-art methods, and discuss the progress that has been made andchallenges still being faced, using specific systems and models as casestudies. In addition, for each category, we discuss advanced topics beingactively explored in the research community, such as big foundation models,unified modeling, in-context few-shot learning, knowledge, robustness, andcomputer vision in the wild, to name a few.",,arXiv,"['cs.cv', 'cs.cl']",, better fewshot relation extraction with label prompt dropout,"['Peiyuan Zhang', 'Wei Lu']",http://arxiv.org/pdf/2210.13733v1.pdf,2022-10-25,," Few-shot relation extraction aims to learn to identify the relation betweentwo entities based on very limited training examples. Recent efforts found thattextual labels (i.e., relation names and relation descriptions) could beextremely useful for learning class representations, which will benefit thefew-shot learning task. However, what is the best way to leverage such labelinformation in the learning process is an important research question. Existingworks largely assume such textual labels are always present during bothlearning and prediction. In this work, we argue that such approaches may notalways lead to optimal results. Instead, we present a novel approach calledlabel prompt dropout, which randomly removes label descriptions in the learningprocess. 
Our experiments show that our approach is able to lead to improvedclass representations, yielding significantly better results on the few-shotrelation extraction task.",,arXiv,['cs.cl'],, stprompt semanticguided and taskdriven prompts for effective fewshot classification,"['Jinta Weng', 'Yue Hu', 'Jing Qiu', 'Heyan Huan']",http://arxiv.org/pdf/2210.16489v1.pdf,2022-10-29,," The effectiveness of prompt learning has been demonstrated in differentpre-trained language models. By formulating suitable template and choosingrepresentative label mapping, prompt learning can be used as an efficientknowledge probe. However, finding suitable prompt in existing methods requiresmultiple experimental attempts or appropriate vector initialization onformulating suitable template and choosing representative label mapping, whichit is more common in few-shot learning tasks. Motivating by PLM workingprocess, we try to construct the prompt from task semantic perspective and thuspropose the STPrompt -Semantic-guided and Task-driven Prompt model.Specifically, two novel prompts generated from the semantic dependency tree(Dep-prompt) and task-specific metadata description (Meta-prompt), are firstlyconstructed in a prompt augmented pool, and the proposed model wouldautomatically select a suitable semantic prompt to motivating the promptlearning process. Our results show that the proposed model achieves thestate-of-the-art performance in five different datasets of few-shot textclassification tasks, which prove that more semantic and significant promptscould assume as a better knowledge proving tool.",,arXiv,"['cs.cl', 'cs.ai']",, retrievalaugmented generative question answering for event argument extraction,"['Xinya Du', 'Heng Ji']",http://arxiv.org/pdf/2211.07067v1.pdf,2022-11-14,," Event argument extraction has long been studied as a sequential predictionproblem with extractive-based methods, tackling each argument in isolation.Although recent work proposes generation-based methods to capturecross-argument dependency, they require generating and post-processing acomplicated target sequence (template). Motivated by these observations andrecent pretrained language models' capabilities of learning fromdemonstrations. We propose a retrieval-augmented generative QA model (R-GQA)for event argument extraction. It retrieves the most similar QA pair andaugments it as prompt to the current example's context, then decodes thearguments as answers. Our approach outperforms substantially prior methodsacross various settings (i.e. fully supervised, domain transfer, and fewshotlearning). Finally, we propose a clustering-based sampling strategy (JointEnc)and conduct a thorough analysis of how different strategies influence thefew-shot learning performance. The implementations are available at https://github.com/xinyadu/RGQA",,arXiv,['cs.cl'],, protsi prototypical siamese network with data augmentation for fewshot subjective answer evaluation,"['Yining Lu', 'Jingxi Qiu', 'Gaurav Gupta']",http://arxiv.org/pdf/2211.09855v1.pdf,2022-11-17,," Subjective answer evaluation is a time-consuming and tedious task, and thequality of the evaluation is heavily influenced by a variety of subjectivepersonal characteristics. Instead, machine evaluation can effectively assisteducators in saving time while also ensuring that evaluations are fair andrealistic. 
However, most existing methods using regular machine learning andnatural language processing techniques are generally hampered by a lack ofannotated answers and poor model interpretability, making them unsuitable forreal-world use. To solve these challenges, we propose ProtSi Network, a uniquesemi-supervised architecture that for the first time uses few-shot learning tosubjective answer evaluation. To evaluate students' answers by similarityprototypes, ProtSi Network simulates the natural process of evaluator scoringanswers by combining Siamese Network which consists of BERT and encoder layerswith Prototypical Network. We employed an unsupervised diverse paraphrasingmodel ProtAugment, in order to prevent overfitting for effective few-shot textclassification. By integrating contrastive learning, the discriminative textissue can be mitigated. Experiments on the Kaggle Short Scoring Datasetdemonstrate that the ProtSi Network outperforms the most recent baseline modelsin terms of accuracy and quadratic weighted kappa.",,arXiv,['cs.cl'],, tempera testtime prompting via reinforcement learning,"['Tianjun Zhang', 'Xuezhi Wang', 'Denny Zhou', 'Dale Schuurmans', 'Joseph E. Gonzalez']",http://arxiv.org/pdf/2211.11890v1.pdf,2022-11-21,," Careful prompt design is critical to the use of large language models inzero-shot or few-shot learning. As a consequence, there is a growing interestin automated methods to design optimal prompts. In this work, we proposeTest-time Prompt Editing using Reinforcement learning (TEMPERA). In contrast toprior prompt generation methods, TEMPERA can efficiently leverage priorknowledge, is adaptive to different queries and provides an interpretableprompt for every query. To achieve this, we design a novel action space thatallows flexible editing of the initial prompts covering a wide set ofcommonly-used components like instructions, few-shot exemplars, andverbalizers. The proposed method achieves significant gains compared withrecent SoTA approaches like prompt tuning, AutoPrompt, and RLPrompt, across avariety of tasks including sentiment analysis, topic classification, naturallanguage inference, and reading comprehension. Our method achieves 5.33x onaverage improvement in sample efficiency when compared to the traditionalfine-tuning methods.",,arXiv,"['cs.cl', 'cs.ai']",, towards practical fewshot federated nlp,"['Dongqi Cai', 'Yaozong Wu', 'Haitao Yuan', 'Shangguang Wang', 'Felix Xiaozhu Lin', 'Mengwei Xu']",http://arxiv.org/pdf/2212.00192v2.pdf,2022-12-01,," Transformer-based pre-trained models have emerged as the predominant solutionfor natural language processing (NLP). Fine-tuning such pre-trained models fordownstream tasks often requires a considerable amount of labeled private data.In practice, private data is often distributed across heterogeneous mobiledevices and may be prohibited from being uploaded. Moreover, well-curatedlabeled data is often scarce, presenting an additional challenge. To addressthese challenges, we first introduce a data generator for federated few-shotlearning tasks, which encompasses the quantity and skewness of scarce labeleddata in a realistic setting. Subsequently, we propose AUG-FedPrompt, aprompt-based federated learning system that exploits abundant unlabeled datafor data augmentation. 
Our experiments indicate that AUG-FedPrompt can performon par with full-set fine-tuning with a limited amount of labeled data.However, such competitive performance comes at a significant system cost.",,arXiv,"['cs.cl', 'cs.lg']",, fewshot nested named entity recognition,"['Hong Ming', 'Jiaoyun Yang', 'Lili Jiang', 'Yan Pan', 'Ning An']",http://arxiv.org/pdf/2212.00953v1.pdf,2022-12-02,," While Named Entity Recognition (NER) is a widely studied task, makinginferences of entities with only a few labeled data has been challenging,especially for entities with nested structures. Unlike flat entities, entitiesand their nested entities are more likely to have similar semantic featurerepresentations, drastically increasing difficulties in classifying differententity categories in the few-shot setting. Although prior work has brieflydiscussed nested structures in the context of few-shot learning, to our bestknowledge, this paper is the first one specifically dedicated to studying thefew-shot nested NER task. Leveraging contextual dependency to distinguishnested entities, we propose a Biaffine-based Contrastive Learning (BCL)framework. We first design a Biaffine span representation module for learningthe contextual span dependency representation for each entity span rather thanonly learning its semantic representation. We then merge these tworepresentations by the residual connection to distinguish nested entities.Finally, we build a contrastive learning framework to adjust the representationdistribution for larger margin boundaries and more generalized domain transferlearning ability. We conducted experimental studies on three English, German,and Russian nested NER datasets. The results show that the BCL outperformedthree baseline models on the 1-shot and 5-shot tasks in terms of F1 score.",,arXiv,"['cs.cl', 'cs.ai']",, improving fewshot performance of language models via nearest neighbor calibration,"['Feng Nie', 'Meixi Chen', 'Zhirui Zhang', 'Xu Cheng']",http://arxiv.org/pdf/2212.02216v1.pdf,2022-12-05,," Pre-trained language models (PLMs) have exhibited remarkable few-shotlearning capabilities when provided a few examples in a natural language promptas demonstrations of test instances, i.e., in-context learning. However, theperformance of in-context learning is susceptible to the choice of promptformat, training examples and the ordering of the training examples. In thispaper, we propose a novel nearest-neighbor calibration framework for in-contextlearning to ease this issue. It is inspired by a phenomenon that the in-contextlearning paradigm produces incorrect labels when inferring training instances,which provides a useful supervised signal to calibrate predictions. Thus, ourmethod directly augments the predictions with a $k$-nearest-neighbor ($k$NN)classifier over a datastore of cached few-shot instance representationsobtained by PLMs and their corresponding labels. Then adaptive neighborselection and feature regularization modules are introduced to make full use ofa few support instances to reduce the $k$NN retrieval noise. 
Experiments onvarious few-shot text classification tasks demonstrate that our methodsignificantly improves in-context learning, while even achieving comparableperformance with state-of-the-art tuning-based approaches in some sentimentanalysis tasks.",,arXiv,['cs.cl'],, jampatoisnli a jamaican patois natural language inference dataset,"['Ruth-Ann Armstrong', 'John Hewitt', 'Christopher Manning']",http://arxiv.org/pdf/2212.03419v1.pdf,2022-12-07,," JamPatoisNLI provides the first dataset for natural language inference in acreole language, Jamaican Patois. Many of the most-spoken low-resourcelanguages are creoles. These languages commonly have a lexicon derived from amajor world language and a distinctive grammar reflecting the languages of theoriginal speakers and the process of language birth by creolization. This givesthem a distinctive place in exploring the effectiveness of transfer from largemonolingual or multilingual pretrained models. While our work, along withprevious work, shows that transfer from these models to low-resource languagesthat are unrelated to languages in their training set is not very effective, wewould expect stronger results from transfer to creoles. Indeed, our experimentsshow considerably better results from few-shot learning of JamPatoisNLI thanfor such unrelated languages, and help us begin to understand how the uniquerelationship between creoles and their high-resource base languages affectcross-lingual transfer. JamPatoisNLI, which consists of naturally-occurringpremises and expert-written hypotheses, is a step towards steering researchinto a traditionally underserved language and a useful benchmark forunderstanding cross-lingual NLP.",,arXiv,"['cs.cl', 'cs.lg', 'i.2.7']",, learn to explore on bootstrapping interactive data exploration with metalearning,"['Yukun Cao', 'Xike Xie', 'Kexin Huang']",http://arxiv.org/pdf/2212.03423v4.pdf,2022-12-07,," Interactive data exploration (IDE) is an effective way of comprehending bigdata, whose volume and complexity are beyond human abilities. The main goal ofIDE is to discover user interest regions from a database through multi-roundsof user labelling. Existing IDEs adopt active-learning framework, where usersiteratively discriminate or label the interestingness of selected tuples. Theprocess of data exploration can be viewed as the process of training aclassifier, which determines whether a database tuple is interesting to a user.An efficient exploration thus takes very few iterations of user labelling toreach the data region of interest. In this work, we consider the dataexploration as the process of few-shot learning, where the classifier islearned with only a few training examples, or exploration iterations. To thisend, we propose a learning-to-explore framework, based on meta-learning, whichlearns how to learn a classifier with automatically generated meta-tasks, sothat the exploration process can be much shortened. Extensive experiments onreal datasets show that our proposal outperforms existing explore-by-examplesolutions in terms of accuracy and efficiency.",,arXiv,"['cs.db', 'cs.ai']",, demystifying prompts in language models via perplexity estimation,"['Hila Gonen', 'Srini Iyer', 'Terra Blevins', 'Noah A. Smith', 'Luke Zettlemoyer']",http://arxiv.org/pdf/2212.04037v1.pdf,2022-12-08,," Language models can be prompted to perform a wide variety of zero- andfew-shot learning problems. 
However, performance varies significantly with thechoice of prompt, and we do not yet understand why this happens or how to pickthe best prompts. In this work, we analyze the factors that contribute to thisvariance and establish a new empirical hypothesis: the performance of a promptis coupled with the extent to which the model is familiar with the language itcontains. Over a wide range of tasks, we show that the lower the perplexity ofthe prompt is, the better the prompt is able to perform the task. As a result,we devise a method for creating prompts: (1) automatically extend a small seedset of manually written prompts by paraphrasing using GPT3 and backtranslationand (2) choose the lowest perplexity prompts to get significant gains inperformance.",,arXiv,['cs.cl'],, localized latent updates for finetuning visionlanguage models,"['Moritz Ibing', 'Isaak Lim', 'Leif Kobbelt']",http://arxiv.org/pdf/2212.06556v1.pdf,2022-12-13,," Although massive pre-trained vision-language models like CLIP show impressivegeneralization capabilities for many tasks, still it often remains necessary tofine-tune them for improved performance on specific datasets. When doing so, itis desirable that updating the model is fast and that the model does not loseits capabilities on data outside of the dataset, as is often the case withclassical fine-tuning approaches. In this work we suggest a lightweightadapter, that only updates the models predictions close to seen datapoints. Wedemonstrate the effectiveness and speed of this relatively simple approach inthe context of few-shot learning, where our results both on classes seen andunseen during training are comparable with or improve on the state of the art.",,arXiv,"['cs.cv', 'cs.cl', 'cs.lg']",, alert adapting language models to reasoning tasks,"['Ping Yu', 'Tianlu Wang', 'Olga Golovneva', 'Badr AlKhamissi', 'Siddharth Verma', 'Zhijing Jin', 'Gargi Ghosh', 'Mona Diab', 'Asli Celikyilmaz']",http://arxiv.org/pdf/2212.08286v2.pdf,2022-12-16,," Current large language models can perform reasonably well on complex tasksthat require step-by-step reasoning with few-shot learning. Are these modelsapplying reasoning skills they have learnt during pre-training and reasonoutside of their training context, or are they simply memorizing their trainingcorpus at finer granularity and have learnt to better understand their context?To tease apart these possibilities, we introduce ALERT, a benchmark and suiteof analyses for assessing language models' reasoning ability comparingpre-trained and finetuned models on complex tasks that require reasoning skillsto solve. ALERT provides a test bed to asses any language model on fine-grainedreasoning skills, which spans over 20 datasets and covers 10 differentreasoning skills. We leverage ALERT to further investigate the role offinetuning. With extensive empirical analysis we find that language modelslearn more reasoning skills such as textual entailment, abductive reasoning,and analogical reasoning during finetuning stage compared to pretraining state.We also find that when language models are finetuned they tend to overfit tothe prompt template, which hurts the robustness of models causinggeneralization problems.",,arXiv,['cs.cl'],, learning from taxonomy multilabel fewshot classification for everyday sound recognition,"['Jinhua Liang', 'Huy Phan', 'Emmanouil Benetos']",http://arxiv.org/pdf/2212.08952v1.pdf,2022-12-17,," Everyday sound recognition aims to infer types of sound events in audiostreams. 
While many works succeeded in training models with high performance ina fully-supervised manner, they are still restricted to the demand of largequantities of labelled data and the range of predefined classes. To overcomethese drawbacks, this work firstly curates a new database named FSD-FS formulti-label few-shot audio classification. It then explores how to incorporateaudio taxonomy in few-shot learning. Specifically, this work proposeslabel-dependent prototypical networks (LaD-protonet) to exploit parent-childrenrelationships between labels. Plus, it applies taxonomy-aware label smoothingtechniques to boost model performance. Experiments demonstrate thatLaD-protonet outperforms original prototypical networks as well as otherstate-of-the-art methods. Moreover, its performance can be further boosted whencombined with taxonomy-aware label smoothing.",,arXiv,"['cs.sd', 'eess.as']",, a survey on fewshot knowledge graph completion with structural and commonsense knowledge,"['Haodi Ma', 'Daisy Zhe Wang']",http://arxiv.org/pdf/2301.01172v1.pdf,2023-01-03,," Knowledge graphs (KG) have served as the key component of various naturallanguage processing applications. Commonsense knowledge graphs (CKG) are aspecial type of KG, where entities and relations are composed of free-formtext. However, previous works in KG completion and CKG completion suffer fromlong-tail relations and newly-added relations which do not have many knowtriples for training. In light of this, few-shot KG completion (FKGC), whichrequires the strengths of graph representation learning and few-shot learning,has been proposed to challenge the problem of limited annotated data. In thispaper, we comprehensively survey previous attempts on such tasks in the form ofa series of methods and applications. Specifically, we first introduce FKGCchallenges, commonly used KGs, and CKGs. Then we systematically categorize andsummarize existing works in terms of the type of KGs and the methods. Finally,we present applications of FKGC models on prediction tasks in different areasand share our thoughts on future research directions of FKGC.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, learning to initialize can meta learning improve crosstask generalization in prompt tuning,"['Chengwei Qin', 'Qian Li', 'Ruochen Zhao', 'Shafiq Joty']",http://arxiv.org/pdf/2302.08143v3.pdf,2023-02-16,," Prompt tuning (PT) which only tunes the embeddings of an additional sequenceof tokens per task, keeping the pre-trained language model (PLM) frozen, hasshown remarkable performance in few-shot learning. Despite this, PT has beenshown to rely heavily on good initialization of the prompt embeddings. In thiswork, we study meta prompt tuning (MPT) to systematically explore howmeta-learning can help improve (if it can) cross-task generalization in PTthrough learning to initialize the prompt embeddings from other relevant tasks.We empirically analyze a representative set of meta learning algorithms in awide range of adaptation settings with different source/target taskconfigurations on a large set of few-shot tasks. With extensive experiments andanalysis, we demonstrate the effectiveness of MPT. We find the improvement tobe significant particularly on classification tasks. For other kinds of taskssuch as question answering, we observe that while MPT can outperform PT in mostcases, it does not always outperform multi-task learning. 
We further provide anin-depth analysis from the perspective of task similarity.",,arXiv,"['cs.cl', 'cs.ai']",, scalable prompt generation for semisupervised learning with language models,"['Yuhang Zhou', 'Suraj Maharjan', 'Beiye Liu']",http://arxiv.org/pdf/2302.09236v1.pdf,2023-02-18,," Prompt-based learning methods in semi-supervised learning (SSL) settings havebeen shown to be effective on multiple natural language understanding (NLU)datasets and tasks in the literature. However, manually designing multipleprompts and verbalizers requires domain knowledge and human effort, making itdifficult and expensive to scale across different datasets. In this paper, wepropose two methods to automatically design multiple prompts and integrateautomatic verbalizer in SSL settings without sacrificing performance. The firstmethod uses various demonstration examples with learnable continuous prompttokens to create diverse prompt models. The second method uses a varying numberof soft prompt tokens to encourage language models to learn different prompts.For the verbalizer, we use the prototypical verbalizer to replace the manualone. In summary, we obtained the best average accuracy of 73.2% (a relativeimprovement of 2.52% over even the previous state-of-the-art SSL method withmanual prompts and verbalizers) in different few-shot learning settings.",,arXiv,"['cs.cl', 'cs.ai']",, language models are fewshot learners for prognostic prediction,"['Zekai Chen', 'Mariann Micsinai Balan', 'Kevin Brown']",http://arxiv.org/pdf/2302.12692v4.pdf,2023-02-24,," Clinical prediction is an essential task in the healthcare industry. However,the recent success of transformers, on which large language models are built,has not been extended to this domain. In this research, we explore the use oftransformers and language models in prognostic prediction for immunotherapyusing real-world patients' clinical data and molecular profiles. This paperinvestigates the potential of transformers to improve clinical predictioncompared to conventional machine learning approaches and addresses thechallenge of few-shot learning in predicting rare disease areas. The studybenchmarks the efficacy of baselines and language models on prognosticprediction across multiple cancer types and investigates the impact ofdifferent pretrained language models under few-shot regimes. The resultsdemonstrate significant improvements in accuracy and highlight the potential ofNLP in clinical research to improve early detection and intervention fordifferent diseases.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg', 'q-bio.qm']",, prefinetuning for fewshot emotional speech recognition,"['Maximillian Chen', 'Zhou Yu']",http://arxiv.org/pdf/2302.12921v2.pdf,2023-02-24,," Speech models have long been known to overfit individual speakers for manyclassification tasks. This leads to poor generalization in settings where thespeakers are out-of-domain or out-of-distribution, as is common in productionenvironments. We view speaker adaptation as a few-shot learning problem andpropose investigating transfer learning approaches inspired by recent successwith pre-trained models in natural language tasks. We propose pre-finetuningspeech models on difficult tasks to distill knowledge into few-shot downstreamclassification objectives. 
We pre-finetune Wav2Vec2.0 on every permutation offour multiclass emotional speech recognition corpora and evaluate ourpre-finetuned models through 33,600 few-shot fine-tuning trials on theEmotional Speech Dataset.",,arXiv,"['cs.cl', 'cs.lg', 'cs.sd', 'eess.as']",, mixture of soft prompts for controllable data generation,"['Derek Chen', 'Celine Lee', 'Yunan Lu', 'Domenic Rosati', 'Zhou Yu']",http://arxiv.org/pdf/2303.01580v2.pdf,2023-03-02,," Large language models (LLMs) effectively generate fluent text when the targetoutput follows natural language patterns. However, structured prediction tasksconfine the output format to a limited ontology, causing even very large modelsto struggle since they were never trained with such restrictions in mind. Thedifficulty of using LLMs for direct prediction is exacerbated in few-shotlearning scenarios, which commonly arise due to domain shift and resourcelimitations. We flip the problem on its head by leveraging the LLM as a toolfor data augmentation rather than direct prediction. Our proposed Mixture ofSoft Prompts (MSP) serves as a parameter-efficient procedure for generatingdata in a controlled manner. Denoising mechanisms are further applied toimprove the quality of synthesized data. Automatic metrics show our method iscapable of producing diverse and natural text, while preserving labelsemantics. Moreover, MSP achieves state-of-the-art results on three benchmarkswhen compared against strong baselines. Our method offers an alternatedata-centric approach for applying LLMs to complex prediction tasks.",,arXiv,['cs.cl'],, enhancing activity prediction models in drug discovery with the ability to understand human language,"['Philipp Seidl', 'Andreu Vall', 'Sepp Hochreiter', 'Günter Klambauer']",http://arxiv.org/pdf/2303.03363v2.pdf,2023-03-06,," Activity and property prediction models are the central workhorses in drugdiscovery and materials sciences, but currently they have to be trained orfine-tuned for new tasks. Without training or fine-tuning, scientific languagemodels could be used for such low-data tasks through their announced zero- andfew-shot capabilities. However, their predictive quality at activity predictionis lacking. In this work, we envision a novel type of activity prediction modelthat is able to adapt to new prediction tasks at inference time, viaunderstanding textual information describing the task. To this end, we proposea new architecture with separate modules for chemical and natural languageinputs, and a contrastive pre-training objective on data from large biochemicaldatabases. In extensive experiments, we show that our method CLAMP yieldsimproved predictive performance on few-shot learning benchmarks and zero-shotproblems in drug discovery. We attribute the advances of our method to themodularized architecture and to our pre-training objective.",,arXiv,"['q-bio.bm', 'cs.cl', 'cs.lg', 'stat.ml']",, menucraft interactive menu system design with large language models,"['Amir Hossein Kargaran', 'Nafiseh Nikeghbal', 'Abbas Heydarnoori', 'Hinrich Schütze']",http://arxiv.org/pdf/2303.04496v2.pdf,2023-03-08,," Menu system design is a challenging task involving many design options andvarious human factors. For example, one crucial factor that designers need toconsider is the semantic and systematic relation of menu commands. However,capturing these relations can be challenging due to limited availableresources. 
With the advancement of neural language models, large languagemodels can utilize their vast pre-existing knowledge in designing and refiningmenu systems. In this paper, we propose MenuCraft, an AI-assisted designer formenu design that enables collaboration between the designer and a dialoguesystem to design menus. MenuCraft offers an interactive language-based menudesign tool that simplifies the menu design process and enables easycustomization of design options. MenuCraft supports a variety of interactionsthrough dialog that allows performing zero/few-shot learning.",,arXiv,"['cs.cl', 'cs.ai', 'cs.hc']",, consistency analysis of chatgpt,"['Myeongjun Erik Jang', 'Thomas Lukasiewicz']",http://arxiv.org/pdf/2303.06273v3.pdf,2023-03-11,," ChatGPT has gained a huge popularity since its introduction. Its positiveaspects have been reported through many media platforms, and some analyses evenshowed that ChatGPT achieved a decent grade in professional exams, adding extrasupport to the claim that AI can now assist and even replace humans inindustrial fields. Others, however, doubt its reliability and trustworthiness.This paper investigates the trustworthiness of ChatGPT and GPT-4 regardinglogically consistent behaviour, focusing specifically on semantic consistencyand the properties of negation, symmetric, and transitive consistency. Ourfindings suggest that while both models appear to show an enhanced languageunderstanding and reasoning ability, they still frequently fall short ofgenerating logically consistent predictions. We also ascertain via experimentsthat prompt designing, few-shot learning and employing larger large languagemodels (LLMs) are unlikely to be the ultimate solution to resolve theinconsistency issue of LLMs.",,arXiv,"['cs.cl', 'cs.ai']",, learning expressive prompting with residuals for vision transformers,"['Rajshekhar Das', 'Yonatan Dukler', 'Avinash Ravichandran', 'Ashwin Swaminathan']",http://arxiv.org/pdf/2303.15591v1.pdf,2023-03-27,," Prompt learning is an efficient approach to adapt transformers by insertinglearnable set of parameters into the input and intermediate representations ofa pre-trained model. In this work, we present Expressive Prompts with Residuals(EXPRES) which modifies the prompt learning paradigm specifically for effectiveadaptation of vision transformers (ViT). Out method constructs downstreamrepresentations via learnable ``output'' tokens, that are akin to the learnedclass tokens of the ViT. Further for better steering of the downstreamrepresentation processed by the frozen transformer, we introduce residuallearnable tokens that are added to the output of various computations. We applyEXPRES for image classification, few shot learning, and semantic segmentation,and show our method is capable of achieving state of the art prompt tuning on3/3 categories of the VTAB benchmark. In addition to strong performance, weobserve that our approach is an order of magnitude more prompt efficient thanexisting visual prompting baselines. We analytically show the computationalbenefits of our approach over weight space adaptation techniques likefinetuning. 
Lastly we systematically corroborate the architectural design ofour method via a series of ablation experiments.",,arXiv,['cs.cv'],, not all features matter enhancing fewshot clip with adaptive prior refinement,"['Xiangyang Zhu', 'Renrui Zhang', 'Bowei He', 'Aojun Zhou', 'Dong Wang', 'Bin Zhao', 'Peng Gao']",http://arxiv.org/pdf/2304.01195v1.pdf,2023-04-03,," The popularity of Contrastive Language-Image Pre-training (CLIP) haspropelled its application to diverse downstream vision tasks. To improve itscapacity on downstream tasks, few-shot learning has become a widely-adoptedtechnique. However, existing methods either exhibit limited performance orsuffer from excessive learnable parameters. In this paper, we propose APE, anAdaptive Prior rEfinement method for CLIP's pre-trained knowledge, whichachieves superior accuracy with high computational efficiency. Via a priorrefinement module, we analyze the inter-class disparity in the downstream dataand decouple the domain-specific knowledge from the CLIP-extracted cache model.On top of that, we introduce two model variants, a training-free APE and atraining-required APE-T. We explore the trilateral affinities between the testimage, prior cache model, and textual representations, and only enable alightweight category-residual module to be trained. For the average accuracyover 11 benchmarks, both APE and APE-T attain state-of-the-art and respectivelyoutperform the second-best by +1.59% and +1.99% under 16 shots with x30 lesslearnable parameters.",,arXiv,"['cs.cv', 'cs.ai', 'cs.mm']",, sociocultural knowledge is needed for selection of shots in hate speech detection tasks,"['Antonis Maronikolakis', 'Abdullatif Köksal', 'Hinrich Schütze']",http://arxiv.org/pdf/2304.01890v4.pdf,2023-04-04,," We introduce HATELEXICON, a lexicon of slurs and targets of hate speech forthe countries of Brazil, Germany, India and Kenya, to aid training andinterpretability of models. We demonstrate how our lexicon can be used tointerpret model predictions, showing that models developed to classify extremespeech rely heavily on target words when making predictions. Further, wepropose a method to aid shot selection for training in low-resource settingsvia HATELEXICON. In few-shot learning, the selection of shots is of paramountimportance to model performance. In our work, we simulate a few-shot settingfor German and Hindi, using HASOC data for training and the MultilingualHateCheck (MHC) as a benchmark. We show that selecting shots based on ourlexicon leads to models performing better on MHC than models trained on shotssampled randomly. Thus, when given only a few training examples, using ourlexicon to select shots containing more sociocultural information leads tobetter few-shot performance.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, revisiting automated prompting are we actually doing better,"['Yulin Zhou', 'Yiren Zhao', 'Ilia Shumailov', 'Robert Mullins', 'Yarin Gal']",http://arxiv.org/pdf/2304.03609v2.pdf,2023-04-07,," Current literature demonstrates that Large Language Models (LLMs) are greatfew-shot learners, and prompting significantly increases their performance on arange of downstream tasks in a few-shot learning setting. An attempt toautomate human-led prompting followed, with some progress achieved. Inparticular, subsequent work demonstrates automation can outperform fine-tuningin certain K-shot learning scenarios. In this paper, we revisit techniques for automated prompting on six differentdownstream tasks and a larger range of K-shot learning settings. 
We find thatautomated prompting does not consistently outperform simple manual prompts. Ourwork suggests that, in addition to fine-tuning, manual prompts should be usedas a baseline in this line of research.",,arXiv,"['cs.cl', 'cs.lg']",, information extraction from documents question answering vs token classification in realworld setups,"['Laurent Lam', 'Pirashanth Ratnamogan', 'Joël Tang', 'William Vanhuffel', 'Fabien Caspani']",http://arxiv.org/pdf/2304.10994v1.pdf,2023-04-21,," Research in Document Intelligence and especially in Document Key InformationExtraction (DocKIE) has been mainly solved as Token Classification problem.Recent breakthroughs in both natural language processing (NLP) and computervision helped building document-focused pre-training methods, leveraging amultimodal understanding of the document text, layout and image modalities.However, these breakthroughs also led to the emergence of a new DocKIE subtaskof extractive document Question Answering (DocQA), as part of the MachineReading Comprehension (MRC) research field. In this work, we compare theQuestion Answering approach with the classical token classification approachfor document key information extraction. We designed experiments to benchmarkfive different experimental setups : raw performances, robustness to noisyenvironment, capacity to extract long entities, fine-tuning speed on Few-ShotLearning and finally Zero-Shot Learning. Our research showed that when dealingwith clean and relatively short entities, it is still best to use tokenclassification-based approach, while the QA approach could be a goodalternative for noisy environment or long entities use-cases.",,arXiv,['cs.cl'],, causal interventionsbased fewshot named entity recognition,"['Zhen Yang', 'Yongbin Liu', 'Chunping Ouyang']",http://arxiv.org/pdf/2305.01914v1.pdf,2023-05-03,," Few-shot named entity recognition (NER) systems aims at recognizing newclasses of entities based on a few labeled samples. A significant challenge inthe few-shot regime is prone to overfitting than the tasks with abundantsamples. The heavy overfitting in few-shot learning is mainly led by spuriouscorrelation caused by the few samples selection bias. To alleviate the problemof the spurious correlation in the few-shot NER, in this paper, we propose acausal intervention-based few-shot NER method. Based on the prototypicalnetwork, the method intervenes in the context and prototype via backdooradjustment during training. In particular, intervening in the context of theone-shot scenario is very difficult, so we intervene in the prototype viaincremental learning, which can also avoid catastrophic forgetting. Ourexperiments on different benchmarks show that our approach achieves newstate-of-the-art results (achieving up to 29% absolute improvement and 12% onaverage for all tasks).",,arXiv,['cs.cl'],, make promptbased blackbox tuning colorful boosting model generalization from three orthogonal perspectives,"['Qiushi Sun', 'Chengcheng Han', 'Nuo Chen', 'Renyu Zhu', 'Jingyang Gong', 'Xiang Li', 'Ming Gao']",http://arxiv.org/pdf/2305.08088v1.pdf,2023-05-14,," Large language models (LLMs) have shown increasing power on various naturallanguage processing (NLP) tasks. However, tuning these models for downstreamtasks usually needs exorbitant costs or is unavailable due to commercialconsiderations. Recently, black-box tuning has been proposed to address thisproblem by optimizing task-specific prompts without accessing the gradients andhidden representations. 
However, most existing works have yet fully exploited the potential of gradient-free optimization under the scenario of few-shot learning. In this paper, we describe BBT-RGB, a suite of straightforward and complementary techniques for enhancing the efficiency and performance of black-box optimization. Specifically, our method includes three plug-and-play components: (1) Two-stage derivative-free optimization strategy that facilitates fast convergence and mitigates overfitting; (2) Automatic verbalizer construction with its novel usage under few-shot settings; (3) Better prompt initialization policy based on instruction search and auto-selected demonstration. Extensive experiments across various tasks on natural language understanding and inference demonstrate the effectiveness of our method. Our codes are publicly available at https://github.com/QiushiSun/BBT-RGB.",,arXiv,"['cs.cl', 'cs.ai']",, cplnovid contextaware promptbased learning for norm violation detection in online communities,"['Zihao He', 'Jonathan May', 'Kristina Lerman']",http://arxiv.org/pdf/2305.09846v2.pdf,2023-05-16,," Detecting norm violations in online communities is critical to maintaining healthy and safe spaces for online discussions. Existing machine learning approaches often struggle to adapt to the diverse rules and interpretations across different communities due to the inherent challenges of fine-tuning models for such context-specific tasks. In this paper, we introduce Context-aware Prompt-based Learning for Norm Violation Detection (CPL-NoViD), a novel method that employs prompt-based learning to detect norm violations across various types of rules. CPL-NoViD outperforms the baseline by incorporating context through natural language prompts and demonstrates improved performance across different rule types. Significantly, it not only excels in cross-rule-type and cross-community norm violation detection but also exhibits adaptability in few-shot learning scenarios. Most notably, it establishes a new state-of-the-art in norm violation detection, surpassing existing benchmarks. Our work highlights the potential of prompt-based learning for context-sensitive norm violation detection and paves the way for future research on more adaptable, context-aware models to better support online community moderators.",,arXiv,"['cs.cl', 'cs.si']",, a weak supervision approach for fewshot aspect based sentiment,"['Robert Vacareanu', 'Siddharth Varia', 'Kishaloy Halder', 'Shuai Wang', 'Giovanni Paolini', 'Neha Anna John', 'Miguel Ballesteros', 'Smaranda Muresan']",http://arxiv.org/pdf/2305.11979v1.pdf,2023-05-19,," We explore how weak supervision on abundant unlabeled data can be leveraged to improve few-shot performance in aspect-based sentiment analysis (ABSA) tasks. We propose a pipeline approach to construct a noisy ABSA dataset, and we use it to adapt a pre-trained sequence-to-sequence model to the ABSA tasks. We test the resulting model on three widely used ABSA datasets, before and after fine-tuning. Our proposed method preserves the full fine-tuning performance while showing significant improvements (15.84% absolute F1) in the few-shot learning scenario for the harder tasks. 
In zero-shot (i.e., without fine-tuning), our method outperforms the previous state of the art on the aspect extraction sentiment classification (AESC) task and is, additionally, capable of performing the harder aspect sentiment triplet extraction (ASTE) task.",,arXiv,['cs.cl'],, images in language space exploring the suitability of large language models for vision & language tasks,"['Sherzod Hakimov', 'David Schlangen']",http://arxiv.org/pdf/2305.13782v1.pdf,2023-05-23,," Large language models have demonstrated robust performance on various language tasks using zero-shot or few-shot learning paradigms. While being actively researched, multimodal models that can additionally handle images as input have yet to catch up in size and generality with language-only models. In this work, we ask whether language-only models can be utilised for tasks that require visual input -- but also, as we argue, often require a strong reasoning component. Similar to some recent related work, we make visual information accessible to the language model using separate verbalisation models. Specifically, we investigate the performance of open-source, open-access language models against GPT-3 on five vision-language tasks when given textually-encoded visual information. Our results suggest that language models are effective for solving vision-language tasks even with limited samples. This approach also enhances the interpretability of a model's output by providing a means of tracing the output back through the verbalised image content.",,arXiv,['cs.cl'],, improving factuality and reasoning in language models through multiagent debate,"['Yilun Du', 'Shuang Li', 'Antonio Torralba', 'Joshua B. Tenenbaum', 'Igor Mordatch']",http://arxiv.org/pdf/2305.14325v1.pdf,2023-05-23,," Large language models (LLMs) have demonstrated remarkable capabilities in language generation, understanding, and few-shot learning in recent years. An extensive body of work has explored how their performance may be further improved through the tools of prompting, ranging from verification, self-consistency, or intermediate scratchpads. In this paper, we present a complementary approach to improve language responses where multiple language model instances propose and debate their individual responses and reasoning processes over multiple rounds to arrive at a common final answer. Our findings indicate that this approach significantly enhances mathematical and strategic reasoning across a number of tasks. We also demonstrate that our approach improves the factual validity of generated content, reducing fallacious answers and hallucinations that contemporary models are prone to. Our approach may be directly applied to existing black-box models and uses identical procedure and prompts for all tasks we investigate. Overall, our findings suggest that such ""society of minds"" approach has the potential to significantly advance the capabilities of LLMs and pave the way for further breakthroughs in language generation and understanding.",,arXiv,"['cs.cl', 'cs.ai', 'cs.cv', 'cs.lg']",, training on thin air improve image classification with generated data,"['Yongchao Zhou', 'Hshmat Sahak', 'Jimmy Ba']",http://arxiv.org/pdf/2305.15316v1.pdf,2023-05-24,," Acquiring high-quality data for training discriminative models is a crucial yet challenging aspect of building effective predictive systems. 
In this paper, we present Diffusion Inversion, a simple yet effective method that leverages the pre-trained generative model, Stable Diffusion, to generate diverse, high-quality training data for image classification. Our approach captures the original data distribution and ensures data coverage by inverting images to the latent space of Stable Diffusion, and generates diverse novel training images by conditioning the generative model on noisy versions of these vectors. We identify three key components that allow our generated images to successfully supplant the original dataset, leading to a 2-3x enhancement in sample complexity and a 6.5x decrease in sampling time. Moreover, our approach consistently outperforms generic prompt-based steering methods and KNN retrieval baseline across a wide range of datasets. Additionally, we demonstrate the compatibility of our approach with widely-used data augmentation techniques, as well as the reliability of the generated data in supporting various neural architectures and enhancing few-shot learning.",,arXiv,"['cs.cv', 'cs.lg']",, paraamr a largescale syntactically diverse paraphrase dataset by amr backtranslation,"['Kuan-Hao Huang', 'Varun Iyer', 'I-Hung Hsu', 'Anoop Kumar', 'Kai-Wei Chang', 'Aram Galstyan']",http://arxiv.org/pdf/2305.16585v1.pdf,2023-05-26,," Paraphrase generation is a long-standing task in natural language processing (NLP). Supervised paraphrase generation models, which rely on human-annotated paraphrase pairs, are cost-inefficient and hard to scale up. On the other hand, automatically annotated paraphrase pairs (e.g., by machine back-translation), usually suffer from the lack of syntactic diversity -- the generated paraphrase sentences are very similar to the source sentences in terms of syntax. In this work, we present ParaAMR, a large-scale syntactically diverse paraphrase dataset created by abstract meaning representation back-translation. Our quantitative analysis, qualitative examples, and human evaluation demonstrate that the paraphrases of ParaAMR are syntactically more diverse compared to existing large-scale paraphrase datasets while preserving good semantic similarity. In addition, we show that ParaAMR can be used to improve on three NLP tasks: learning sentence embeddings, syntactically controlled paraphrase generation, and data augmentation for few-shot learning. Our results thus showcase the potential of ParaAMR for improving various NLP applications.",,arXiv,['cs.cl'],, adapting languageaudio models as fewshot audio learners,"['Jinhua Liang', 'Xubo Liu', 'Haohe Liu', 'Huy Phan', 'Emmanouil Benetos', 'Mark D. Plumbley', 'Wenwu Wang']",http://arxiv.org/pdf/2305.17719v1.pdf,2023-05-28,," We presented the Treff adapter, a training-efficient adapter for CLAP, to boost zero-shot classification performance by making use of a small set of labelled data. Specifically, we designed CALM to retrieve the probability distribution of text-audio clips over classes using a set of audio-label pairs and combined it with CLAP's zero-shot classification results. Furthermore, we designed a training-free version of the Treff adapter by using CALM as a cosine similarity measure. Experiments showed that the proposed Treff adapter is comparable and even better than fully-supervised methods and adaptation methods in low-shot and data-abundant scenarios. 
While the Treff adapter shows that combining large-scale pretraining and rapid learning of domain-specific knowledge is non-trivial for obtaining generic representations for few-shot learning, it is still limited to audio classification tasks. In the future, we will explore how to use audio-language models in diverse audio domains.",,arXiv,"['eess.as', 'cs.sd']",, deeply coupled crossmodal prompt learning,"['Xuejing Liu', 'Wei Tang', 'Jinghui Lu', 'Rui Zhao', 'Zhaojun Guo', 'Fei Tan']",http://arxiv.org/pdf/2305.17903v3.pdf,2023-05-29,," Recent advancements in multimodal foundation models (e.g., CLIP) have excelled in zero-shot generalization. Prompt tuning involved in the knowledge transfer from foundation models to downstream tasks has gained significant attention recently. Existing prompt-tuning methods in cross-modal learning, however, either solely focus on language branch, or learn vision-language interaction in a shallow mechanism. In this context, we propose a Deeply coupled Cross-modal Prompt learning (DCP) method based on CLIP. DCP flexibly accommodates the interplay between vision and language with a Cross-Modal Prompt Attention (CMPA) mechanism, which enables the mutual exchange of respective representation through a well-connected multi-head attention module progressively and strongly. We then conduct comprehensive few-shot learning experiments on 11 image classification datasets and analyze the robustness to domain shift as well. Thorough experimental analysis evidently demonstrates the superb few-shot generalization and compelling domain adaption capacity of a well-executed DCP. The code can be found at https://github.com/GingL/CMPA.",,arXiv,['cs.cv'],, what does the failure to reason with respectively in zerofewshot settings tell us about language models,"['Ruixiang Cui', 'Seolhwa Lee', 'Daniel Hershcovich', 'Anders Søgaard']",http://arxiv.org/pdf/2305.19597v1.pdf,2023-05-31,," Humans can effortlessly understand the coordinate structure of sentences such as ""Niels Bohr and Kurt Cobain were born in Copenhagen and Seattle, respectively"". In the context of natural language inference (NLI), we examine how language models (LMs) reason with respective readings (Gawron and Kehler, 2004) from two perspectives: syntactic-semantic and commonsense-world knowledge. We propose a controlled synthetic dataset WikiResNLI and a naturally occurring dataset NatResNLI to encompass various explicit and implicit realizations of ""respectively"". We show that fine-tuned NLI models struggle with understanding such readings without explicit supervision. While few-shot learning is easy in the presence of explicit cues, longer training is required when the reading is evoked implicitly, leaving models to rely on common sense inferences. Furthermore, our fine-grained analysis indicates models fail to generalize across different constructions. To conclude, we demonstrate that LMs still lag behind humans in generalizing to the long tail of linguistic constructions.",,arXiv,"['cs.cl', 'cs.ai']",, humanlike fewshot learning via bayesian reasoning over natural language,['Kevin Ellis'],http://arxiv.org/pdf/2306.02797v3.pdf,2023-06-05,," A core tension in models of concept learning is that the model must carefully balance the tractability of inference against the expressivity of the hypothesis class. Humans, however, can efficiently learn a broad range of concepts. We introduce a model of inductive learning that seeks to be human-like in that sense. 
It implements a Bayesian reasoning process where a language model first proposes candidate hypotheses expressed in natural language, which are then re-weighed by a prior and a likelihood. By estimating the prior from human data, we can predict human judgments on learning problems involving numbers and sets, spanning concepts that are generative, discriminative, propositional, and higher-order.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, few shot rationale generation using selftraining with dual teachers,"['Aditya Srikanth Veerubhotla', 'Lahari Poddar', 'Jun Yin', 'György Szarvas', 'Sharanya Eswaran']",http://arxiv.org/pdf/2306.03315v1.pdf,2023-06-05,," Self-rationalizing models that also generate a free-text explanation for their predicted labels are an important tool to build trustworthy AI applications. Since generating explanations for annotated labels is a laborious and costly process, recent models rely on large pretrained language models (PLMs) as their backbone and few-shot learning. In this work we explore a self-training approach leveraging both labeled and unlabeled data to further improve few-shot models, under the assumption that neither human written rationales nor annotated task labels are available at scale. We introduce a novel dual-teacher learning framework, which learns two specialized teacher models for task prediction and rationalization using self-training and distills their knowledge into a multi-tasking student model that can jointly generate the task label and rationale. Furthermore, we formulate a new loss function, Masked Label Regularization (MLR) which promotes explanations to be strongly conditioned on predicted labels. Evaluation on three public datasets demonstrate that the proposed methods are effective in modeling task labels and generating faithful rationales.",,arXiv,"['cs.cl', 'cs.ai']",, a new dataset and empirical study for sentence simplification in chinese,"['Shiping Yang', 'Renliang Sun', 'Xiaojun Wan']",http://arxiv.org/pdf/2306.04188v1.pdf,2023-06-07,," Sentence Simplification is a valuable technique that can benefit language learners and children a lot. However, current research focuses more on English sentence simplification. The development of Chinese sentence simplification is relatively slow due to the lack of data. To alleviate this limitation, this paper introduces CSS, a new dataset for assessing sentence simplification in Chinese. We collect manual simplifications from human annotators and perform data analysis to show the difference between English and Chinese sentence simplifications. Furthermore, we test several unsupervised and zero/few-shot learning methods on CSS and analyze the automatic evaluation and human evaluation results. In the end, we explore whether Large Language Models can serve as high-quality Chinese sentence simplification systems by evaluating them on CSS.",,arXiv,['cs.cl'],, can ai moderate online communities,"['Henrik Axelsen', 'Johannes Rude Jensen', 'Sebastian Axelsen', 'Valdemar Licht', 'Omri Ross']",http://arxiv.org/pdf/2306.05122v1.pdf,2023-06-08,," The task of cultivating healthy communication in online communities becomes increasingly urgent, as gaming and social media experiences become progressively more immersive and life-like. We approach the challenge of moderating online communities by training student models using a large language model (LLM). 
We use zero-shot learning models to distill and expand datasets followed by a few-shot learning and a fine-tuning approach, leveraging open-access generative pre-trained transformer models (GPT) from OpenAI. Our preliminary findings suggest, that when properly trained, LLMs can excel in identifying actor intentions, moderating toxic comments, and rewarding positive contributions. The student models perform above-expectation in non-contextual assignments such as identifying classically toxic behavior and perform sufficiently on contextual assignments such as identifying positive contributions to online discourse. Further, using open-access models like OpenAI's GPT we experience a step-change in the development process for what has historically been a complex modeling task. We contribute to the information system (IS) discourse with a rapid development framework on the application of generative AI in content online moderation and management of culture in decentralized, pseudonymous communities by providing a sample model suite of industrial-ready generative AI models based on open-access LLMs.",,arXiv,['cs.cy'],, the adaio system at the bea2023 shared task on generating ai teacher responses in educational dialogues,"['Adaeze Adigwe', 'Zheng Yuan']",http://arxiv.org/pdf/2306.05360v1.pdf,2023-06-08,," This paper presents the ADAIO team's system entry in the Building Educational Applications (BEA) 2023 Shared Task on Generating AI Teacher Responses in Educational Dialogues. The task aims to assess the performance of state-of-the-art generative models as AI teachers in producing suitable responses within a student-teacher dialogue. Our system comprises evaluating various baseline models using OpenAI GPT-3 and designing diverse prompts to prompt the OpenAI models for teacher response generation. After the challenge, our system achieved second place by employing a few-shot prompt-based approach with the OpenAI text-davinci-003 model. The results highlight the few-shot learning capabilities of large-language models, particularly OpenAI's GPT-3, in the role of AI teachers.",,arXiv,"['cs.cl', 'cs.ai', 'cs.cy']",, rethink the effectiveness of text data augmentation an empirical analysis,"['Zhengxiang Shi', 'Aldo Lipani']",http://arxiv.org/pdf/2306.07664v1.pdf,2023-06-13,," In recent years, language models (LMs) have made remarkable progress in advancing the field of natural language processing (NLP). However, the impact of data augmentation (DA) techniques on the fine-tuning (FT) performance of these LMs has been a topic of ongoing debate. In this study, we evaluate the effectiveness of three different FT methods in conjugation with back-translation across an array of 7 diverse NLP tasks, including classification and regression types, covering single-sentence and sentence-pair tasks. Contrary to prior assumptions that DA does not contribute to the enhancement of LMs' FT performance, our findings reveal that continued pre-training on augmented data can effectively improve the FT performance of the downstream tasks. In the most favourable case, continued pre-training improves the performance of FT by more than 10% in the few-shot learning setting. 
Our finding highlights the potential of DA as a powerful tool for bolstering LMs' performance.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, neural finetuning search for fewshot learning,"['Panagiotis Eustratiadis', 'Łukasz Dudziak', 'Da Li', 'Timothy Hospedales']",http://arxiv.org/pdf/2306.09295v1.pdf,2023-06-15,," In few-shot recognition, a classifier that has been trained on one set of classes is required to rapidly adapt and generalize to a disjoint, novel set of classes. To that end, recent studies have shown the efficacy of fine-tuning with carefully crafted adaptation architectures. However this raises the question of: How can one design the optimal adaptation strategy? In this paper, we study this question through the lens of neural architecture search (NAS). Given a pre-trained neural network, our algorithm discovers the optimal arrangement of adapters, which layers to keep frozen and which to fine-tune. We demonstrate the generality of our NAS method by applying it to both residual networks and vision transformers and report state-of-the-art performance on Meta-Dataset and Meta-Album.",,arXiv,"['cs.cv', 'cs.lg']",, multilingual fewshot learning via language model retrieval,"['Genta Indra Winata', 'Liang-Kang Huang', 'Soumya Vadlamannati', 'Yash Chandarana']",http://arxiv.org/pdf/2306.10964v1.pdf,2023-06-19,," Transformer-based language models have achieved remarkable success in few-shot in-context learning and drawn a lot of research interest. However, these models' performance greatly depends on the choice of the example prompts and also has high variability depending on how samples are chosen. In this paper, we conduct a comprehensive study of retrieving semantically similar few-shot samples and using them as the context, as it helps the model decide the correct label without any gradient update in the multilingual and cross-lingual settings. We evaluate the proposed method on five natural language understanding datasets related to intent detection, question classification, sentiment analysis, and topic classification. The proposed method consistently outperforms random sampling in monolingual and cross-lingual tasks in non-English languages.",,arXiv,['cs.cl'],, robut a systematic study of table qa robustness against humanannotated adversarial perturbations,"['Yilun Zhao', 'Chen Zhao', 'Linyong Nan', 'Zhenting Qi', 'Wenlin Zhang', 'Xiangru Tang', 'Boyu Mi', 'Dragomir Radev']",http://arxiv.org/pdf/2306.14321v1.pdf,2023-06-25,," Despite significant progress having been made in question answering on tabular data (Table QA), it's unclear whether, and to what extent existing Table QA models are robust to task-specific perturbations, e.g., replacing key question entities or shuffling table columns. To systematically study the robustness of Table QA models, we propose a benchmark called RobuT, which builds upon existing Table QA datasets (WTQ, WikiSQL-Weak, and SQA) and includes human-annotated adversarial perturbations in terms of table header, table content, and question. Our results indicate that both state-of-the-art Table QA models and large language models (e.g., GPT-3) with few-shot learning falter in these adversarial sets. We propose to address this problem by using large language models to generate adversarial examples to enhance training, which significantly improves the robustness of Table QA models. 
Our data and code is publicly available at https://github.com/yilunzhao/RobuT.",,arXiv,"['cs.cl', 'cs.ai']",, benchmarking large language model capabilities for conditional generation,"['Joshua Maynez', 'Priyanka Agrawal', 'Sebastian Gehrmann']",http://arxiv.org/pdf/2306.16793v1.pdf,2023-06-29,," Pre-trained large language models (PLMs) underlie most new developments in natural language processing. They have shifted the field from application-specific model pipelines to a single model that is adapted to a wide range of tasks. Autoregressive PLMs like GPT-3 or PaLM, alongside techniques like few-shot learning, have additionally shifted the output modality to generation instead of classification or regression. Despite their ubiquitous use, the generation quality of language models is rarely evaluated when these models are introduced. Additionally, it is unclear how existing generation tasks--while they can be used to compare systems at a high level--relate to the real world use cases for which people have been adopting them. In this work, we discuss how to adapt existing application-specific generation benchmarks to PLMs and provide an in-depth, empirical study of the limitations and capabilities of PLMs in natural language generation tasks along dimensions such as scale, architecture, input and output language. Our results show that PLMs differ in their applicability to different data regimes and their generalization to multiple languages and inform which PLMs to use for a given generation task setup. We share best practices to be taken into consideration when benchmarking generation capabilities during the development of upcoming PLMs.",,arXiv,['cs.cl'],, on conditional and compositional language model differentiable prompting,"['Jonathan Pilault', 'Can Liu', 'Mohit Bansal', 'Markus Dreyer']",http://arxiv.org/pdf/2307.01446v1.pdf,2023-07-04,," Prompts have been shown to be an effective method to adapt a frozen Pretrained Language Model (PLM) to perform well on downstream tasks. Prompts can be represented by a human-engineered word sequence or by a learned continuous embedding. In this work, we investigate conditional and compositional differentiable prompting. We propose a new model, Prompt Production System (PRopS), which learns to transform task instructions or input metadata, into continuous prompts that elicit task-specific outputs from the PLM. Our model uses a modular network structure based on our neural formulation of Production Systems, which allows the model to learn discrete rules -- neural functions that learn to specialize in transforming particular prompt input patterns, making it suitable for compositional transfer learning and few-shot learning. We present extensive empirical and theoretical analysis and show that PRopS consistently surpasses other PLM adaptation techniques, and often improves upon fully fine-tuned models, on compositional generalization tasks, controllable summarization and multilingual translation, while needing fewer trainable parameters.",,arXiv,"['cs.cl', 'cs.lg']",, diverse retrievalaugmented incontext learning for dialogue state tracking,"['Brendan King', 'Jeffrey Flanigan']",http://arxiv.org/pdf/2307.01453v1.pdf,2023-07-04,," There has been significant interest in zero and few-shot learning for dialogue state tracking (DST) due to the high cost of collecting and annotating task-oriented dialogues. Recent work has demonstrated that in-context learning requires very little data and zero parameter updates, and even outperforms trained methods in the few-shot setting (Hu et al. 2022). 
We propose RefPyDST, which advances the state of the art with three advancements to in-context learning for DST. First, we formulate DST as a Python programming task, explicitly modeling language coreference as variable reference in Python. Second, since in-context learning depends highly on the context examples, we propose a method to retrieve a diverse set of relevant examples to improve performance. Finally, we introduce a novel re-weighting method during decoding that takes into account probabilities of competing surface forms, and produces a more accurate dialogue state prediction. We evaluate our approach using MultiWOZ and achieve state-of-the-art multi-domain joint-goal accuracy in zero and few-shot settings.",,arXiv,['cs.cl'],, generating efficient training data via llmbased attribute manipulation,"['Letian Peng', 'Yuwei Zhang', 'Jingbo Shang']",http://arxiv.org/pdf/2307.07099v1.pdf,2023-07-14,," In this paper, we propose a novel method, Chain-of-Thoughts Attribute Manipulation (CoTAM), to guide few-shot learning by carefully crafted data from Large Language Models (LLMs). The main idea is to create data with changes only in the attribute targeted by the task. Inspired by facial attribute manipulation, our approach generates label-switched data by leveraging LLMs to manipulate task-specific attributes and reconstruct new sentences in a controlled manner. Instead of conventional latent representation controlling, we implement chain-of-thoughts decomposition and reconstruction to adapt the procedure to LLMs. Extensive results on text classification and other tasks verify the advantage of CoTAM over other LLM-based text generation methods with the same number of training examples. Analysis visualizes the attribute manipulation effectiveness of CoTAM and presents the potential of LLM-guided learning with even less supervision.",,arXiv,['cs.cl'],, overthinking the truth understanding how language models process false demonstrations,"['Danny Halawi', 'Jean-Stanislas Denain', 'Jacob Steinhardt']",http://arxiv.org/pdf/2307.09476v1.pdf,2023-07-18,," Modern language models can imitate complex patterns through few-shot learning, enabling them to complete challenging tasks without fine-tuning. However, imitation can also lead models to reproduce inaccuracies or harmful content if present in the context. We study harmful imitation through the lens of a model's internal representations, and identify two related phenomena: overthinking and false induction heads. The first phenomenon, overthinking, appears when we decode predictions from intermediate layers, given correct vs. incorrect few-shot demonstrations. At early layers, both demonstrations induce similar model behavior, but the behavior diverges sharply at some ""critical layer"", after which the accuracy given incorrect demonstrations progressively decreases. The second phenomenon, false induction heads, are a possible mechanistic cause of overthinking: these are heads in late layers that attend to and copy false information from previous demonstrations, and whose ablation reduces overthinking. 
Beyond scientific understanding, our results suggest that studying intermediate model computations could be a promising avenue for understanding and guarding against harmful model behaviors.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cl']",, does correction remain a problem for large language models,"['Xiaowu Zhang', 'Xiaotian Zhang', 'Cheng Yang', 'Hang Yan', 'Xipeng Qiu']",http://arxiv.org/pdf/2308.01776v2.pdf,2023-08-03,," As large language models, such as GPT, continue to advance the capabilities of natural language processing (NLP), the question arises: does the problem of correction still persist? This paper investigates the role of correction in the context of large language models by conducting two experiments. The first experiment focuses on correction as a standalone task, employing few-shot learning techniques with GPT-like models for error correction. The second experiment explores the notion of correction as a preparatory task for other NLP tasks, examining whether large language models can tolerate and perform adequately on texts containing certain levels of noise or errors. By addressing these experiments, we aim to shed light on the significance of correction in the era of large language models and its implications for various NLP applications.",,arXiv,['cs.cl'],, thespian multicharacter text roleplaying game agents,"['Christopher Cui', 'Xiangyu Peng', 'Mark Riedl']",http://arxiv.org/pdf/2308.01872v1.pdf,2023-08-03,," Text-adventure games and text role-playing games are grand challenges for reinforcement learning game playing agents. Text role-playing games are open-ended environments where an agent must faithfully play a particular character. We consider the distinction between characters and actors, where an actor agent has the ability to play multiple characters. We present a framework we call a thespian agent that can learn to emulate multiple characters along with a soft prompt that can be used to direct it as to which character to play at any time. We further describe an attention mechanism that allows the agent to learn new characters that are based on previously learned characters in a few-shot fashion. We show that our agent outperforms the state of the art agent framework in multi-character learning and few-shot learning.",,arXiv,"['cs.ai', 'cs.cl']",, metalearning in healthcare a survey,"['Alireza Rafiei', 'Ronald Moore', 'Sina Jahromi', 'Farshid Hajati', 'Rishikesan Kamaleswaran']",http://arxiv.org/pdf/2308.02877v1.pdf,2023-08-05,," As a subset of machine learning, meta-learning, or learning to learn, aims at improving the model's capabilities by employing prior knowledge and experience. A meta-learning paradigm can appropriately tackle the conventional challenges of traditional learning approaches, such as insufficient number of samples, domain shifts, and generalization. These unique characteristics position meta-learning as a suitable choice for developing influential solutions in various healthcare contexts, where the available data is often insufficient, and the data collection methodologies are different. This survey discusses meta-learning broad applications in the healthcare domain to provide insight into how and where it can address critical healthcare challenges. We first describe the theoretical foundations and pivotal methods of meta-learning. We then divide the employed meta-learning approaches in the healthcare domain into two main categories of multi/single-task learning and many/few-shot learning and survey the studies. 
Finally, we highlight the current challenges in meta-learning research, discuss the potential solutions and provide future perspectives on meta-learning in healthcare.",,arXiv,"['cs.lg', 'cs.ai']",, autoconv automatically generating informationseeking conversations with large language models,"['Siheng Li', 'Cheng Yang', 'Yichun Yin', 'Xinyu Zhu', 'Zesen Cheng', 'Lifeng Shang', 'Xin Jiang', 'Qun Liu', 'Yujiu Yang']",http://arxiv.org/pdf/2308.06507v1.pdf,2023-08-12,," Information-seeking conversation, which aims to help users gather information through conversation, has achieved great progress in recent years. However, the research is still stymied by the scarcity of training data. To alleviate this problem, we propose AutoConv for synthetic conversation generation, which takes advantage of the few-shot learning ability and generation capacity of large language models (LLM). Specifically, we formulate the conversation generation problem as a language modeling task, then finetune an LLM with a few human conversations to capture the characteristics of the information-seeking process and use it for generating synthetic conversations with high quality. Experimental results on two frequently-used datasets verify that AutoConv has substantial improvements over strong baselines and alleviates the dependence on human annotation. In addition, we also provide several analysis studies to promote future research.",,arXiv,['cs.cl'],, distilled feature fields enable fewshot languageguided manipulation,"['William Shen', 'Ge Yang', 'Alan Yu', 'Jansen Wong', 'Leslie Pack Kaelbling', 'Phillip Isola']",http://arxiv.org/pdf/2308.07931v2.pdf,2023-07-27,," Self-supervised and language-supervised image models contain rich knowledge of the world that is important for generalization. Many robotic tasks, however, require a detailed understanding of 3D geometry, which is often lacking in 2D image features. This work bridges this 2D-to-3D gap for robotic manipulation by leveraging distilled feature fields to combine accurate 3D geometry with rich semantics from 2D foundation models. We present a few-shot learning method for 6-DOF grasping and placing that harnesses these strong spatial and semantic priors to achieve in-the-wild generalization to unseen objects. Using features distilled from a vision-language model, CLIP, we present a way to designate novel objects for manipulation via free-text natural language, and demonstrate its ability to generalize to unseen expressions and novel categories of objects.",,arXiv,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.lg', 'cs.ro']",, refashioning emotion recognition modelling the advent of generalised large models,"['Zixing Zhang', 'Liyizhe Peng', 'Tao Pang', 'Jing Han', 'Huan Zhao', 'Bjorn W. Schuller']",http://arxiv.org/pdf/2308.11578v1.pdf,2023-08-21,," After the inception of emotion recognition or affective computing, it has increasingly become an active research topic due to its broad applications. Over the past couple of decades, emotion recognition models have gradually migrated from statistically shallow models to neural network-based deep models, which can significantly boost the performance of emotion recognition models and consistently achieve the best results on different benchmarks. Therefore, in recent years, deep models have always been considered the first option for emotion recognition. 
However, the debut of large language models (LLMs), such as ChatGPT, has remarkably astonished the world due to their emerged capabilities of zero/few-shot learning, in-context learning, chain-of-thought, and others that are never shown in previous deep models. In the present paper, we comprehensively investigate how the LLMs perform in emotion recognition in terms of diverse aspects, including in-context learning, few-shot learning, accuracy, generalisation, and explanation. Moreover, we offer some insights and pose other potential challenges, hoping to ignite broader discussions about enhancing emotion recognition in the new era of advanced and generalised large models.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, gpachov at checkthat! 2023 a diverse multiapproach ensemble for subjectivity detection in news articles,"['Georgi Pachov', 'Dimitar Dimitrov', 'Ivan Koychev', 'Preslav Nakov']",http://arxiv.org/pdf/2309.06844v1.pdf,2023-09-13,," The wide-spread use of social networks has given rise to subjective, misleading, and even false information on the Internet. Thus, subjectivity detection can play an important role in ensuring the objectiveness and the quality of a piece of information. This paper presents the solution built by the Gpachov team for the CLEF-2023 CheckThat! lab Task 2 on subjectivity detection. Three different research directions are explored. The first one is based on fine-tuning a sentence embeddings encoder model and dimensionality reduction. The second one explores a sample-efficient few-shot learning model. The third one evaluates fine-tuning a multilingual transformer on an altered dataset, using data from multiple languages. Finally, the three approaches are combined in a simple majority voting ensemble, resulting in 0.77 macro F1 on the test set and achieving 2nd place on the English subtask.",,arXiv,"['cs.cl', 'cs.ai', 'cs.mm']",, "an empathybased sandbox approach to bridge attitudes, goals, knowledge, and behaviors in the privacy paradox","['Chaoran Chen', 'Weijun Li', 'Wenxin Song', 'Yanfang Ye', 'Yaxing Yao', 'Toby Jia-jun Li']",http://arxiv.org/pdf/2309.14510v1.pdf,2023-09-25,," The ""privacy paradox"" describes the discrepancy between users' privacy attitudes and their actual behaviors. Mitigating this discrepancy requires solutions that account for both system opaqueness and users' hesitations in testing different privacy settings due to fears of unintended data exposure. We introduce an empathy-based approach that allows users to experience how privacy behaviors may alter system outcomes in a risk-free sandbox environment from the perspective of artificially generated personas. To generate realistic personas, we introduce a novel pipeline that augments the outputs of large language models using few-shot learning, contextualization, and chain of thoughts. Our empirical studies demonstrated the adequate quality of generated personas and highlighted the changes in privacy-related applications (e.g., online advertising) caused by different personas. Furthermore, users demonstrated cognitive and emotional empathy towards the personas when interacting with our sandbox. 
We offered design implications for downstream applications in improving user privacy literacy and promoting behavior changes.",,arXiv,['cs.hc'],, injecting a structural inductive bias into a seq2seq model by simulation,"['Matthias Lindemann', 'Alexander Koller', 'Ivan Titov']",http://arxiv.org/pdf/2310.00796v2.pdf,2023-10-01,," Strong inductive biases enable learning from little data and help generalization outside of the training distribution. Popular neural architectures such as Transformers lack strong structural inductive biases for seq2seq NLP tasks on their own. Consequently, they struggle with systematic generalization beyond the training distribution, e.g. with extrapolating to longer inputs, even when pre-trained on large amounts of text. We show how a structural inductive bias can be efficiently injected into a seq2seq model by pre-training it to simulate structural transformations on synthetic data. Specifically, we inject an inductive bias towards Finite State Transducers (FSTs) into a Transformer by pre-training it to simulate FSTs given their descriptions. Our experiments show that our method imparts the desired inductive bias, resulting in improved systematic generalization and better few-shot learning for FST-like tasks. Our analysis shows that fine-tuned models accurately capture the state dynamics of the unseen underlying FSTs, suggesting that the simulation process is internalized by the fine-tuned model.",,arXiv,['cs.cl'],, tram benchmarking temporal reasoning for large language models,"['Yuqing Wang', 'Yun Zhao']",http://arxiv.org/pdf/2310.00835v2.pdf,2023-10-02,," Reasoning about time is essential for understanding the nuances of events described in natural language. Previous research on this topic has been limited in scope, characterized by a lack of standardized benchmarks that would allow for consistent evaluations across different studies. In this paper, we introduce TRAM, a temporal reasoning benchmark composed of ten datasets, encompassing various temporal aspects of events such as order, arithmetic, frequency, and duration, designed to facilitate a comprehensive evaluation of the temporal reasoning capabilities of large language models (LLMs). We conduct an extensive evaluation using popular LLMs, such as GPT-4 and Llama2, in both zero-shot and few-shot learning scenarios. Additionally, we employ BERT-based models to establish the baseline evaluations. Our findings indicate that these models still trail human performance in temporal reasoning tasks. It is our aspiration that TRAM will spur further progress in enhancing the temporal reasoning abilities of LLMs.",,arXiv,['cs.cl'],, procedural text mining with large language models,"['Anisa Rula', ""Jennifer D'Souza""]",http://arxiv.org/pdf/2310.03376v1.pdf,2023-10-05,," Recent advancements in the field of Natural Language Processing, particularly the development of large-scale language models that are pretrained on vast amounts of knowledge, are creating novel opportunities within the realm of Knowledge Engineering. In this paper, we investigate the usage of large language models (LLMs) in both zero-shot and in-context learning settings to tackle the problem of extracting procedures from unstructured PDF text in an incremental question-answering fashion. In particular, we leverage the current state-of-the-art GPT-4 (Generative Pre-trained Transformer 4) model, accompanied by two variations of in-context learning that involve an ontology with definitions of procedures and steps and a limited number of samples of few-shot learning. 
The findings highlight both the promise of this approach and the value of the in-context learning customisations. These modifications have the potential to significantly address the challenge of obtaining sufficient training data, a hurdle often encountered in deep learning-based Natural Language Processing techniques for procedure extraction.",,arXiv,"['cs.cl', 'cs.ai', 'cs.it', 'math.it']",, prototypeformer learning to explore prototype relationships for fewshot image classification,"['Feihong He', 'Gang Li', 'Lingyu Si', 'Leilei Yan', 'Fanzhang Li', 'Fuchun Sun']",http://arxiv.org/pdf/2310.03517v1.pdf,2023-10-05,," Few-shot image classification has received considerable attention for addressing the challenge of poor classification performance with limited samples in novel classes. However, numerous studies have employed sophisticated learning strategies and diversified feature extraction methods to address this issue. In this paper, we propose our method called PrototypeFormer, which aims to significantly advance traditional few-shot image classification approaches by exploring prototype relationships. Specifically, we utilize a transformer architecture to build a prototype extraction module, aiming to extract class representations that are more discriminative for few-shot classification. Additionally, during the model training process, we propose a contrastive learning-based optimization approach to optimize prototype features in few-shot learning scenarios. Despite its simplicity, the method performs remarkably well, with no bells and whistles. We have experimented with our approach on several popular few-shot image classification benchmark datasets, which shows that our method outperforms all current state-of-the-art methods. In particular, our method achieves 97.07% and 90.88% on 5-way 5-shot and 5-way 1-shot tasks of miniImageNet, which surpasses the state-of-the-art results with accuracy of 7.27% and 8.72%, respectively. The code will be released later.",,arXiv,['cs.cv'],, a holistic evaluation of piano sound quality,"['Monan Zhou', 'Shangda Wu', 'Shaohua Ji', 'Zijin Li', 'Wei Li']",http://arxiv.org/pdf/2310.04722v1.pdf,2023-10-07,," This paper aims to develop a holistic evaluation method for piano sound quality to assist in purchasing decisions. Unlike previous studies that focused on the effect of piano performance techniques on sound quality, this study evaluates the inherent sound quality of different pianos. To derive quality evaluation systems, the study uses subjective questionnaires based on a piano sound quality dataset. The method selects the optimal piano classification models by comparing the fine-tuning results of different pre-training models of Convolutional Neural Networks (CNN). To improve the interpretability of the models, the study applies Equivalent Rectangular Bandwidth (ERB) analysis. The results reveal that musically trained individuals are better able to distinguish between the sound quality differences of different pianos. The best fine-tuned CNN pre-trained backbone achieves a high accuracy of 98.3% as the piano classifier. However, the dataset is limited, and the audio is sliced to increase its quantity, resulting in a lack of diversity and balance, so we use focal loss to reduce the impact of data imbalance. 
To optimize the method, the dataset will be expanded, or few-shot learning techniques will be employed in future research.",,arXiv,"['cs.sd', 'cs.ai', 'eess.as']",, argumentative stance prediction an exploratory study on multimodality and fewshot learning,"['Arushi Sharma', 'Abhibha Gupta', 'Maneesh Bilalpur']",http://arxiv.org/pdf/2310.07093v1.pdf,2023-10-11,," To advance argumentative stance prediction as a multimodal problem, the First Shared Task in Multimodal Argument Mining hosted stance prediction in crucial social topics of gun control and abortion. Our exploratory study attempts to evaluate the necessity of images for stance prediction in tweets and compare out-of-the-box text-based large-language models (LLM) in few-shot settings against fine-tuned unimodal and multimodal models. Our work suggests an ensemble of fine-tuned text-based language models (0.817 F1-score) outperforms both the multimodal (0.677 F1-score) and text-based few-shot prediction using a recent state-of-the-art LLM (0.550 F1-score). In addition to the differences in performance, our findings suggest that the multimodal models tend to perform better when image content is summarized as natural language over their native pixel structure and, using in-context examples improves few-shot performance of LLMs.",,arXiv,['cs.cl'],, llmaugmented preference learning from natural language,"['Inwon Kang', 'Sikai Ruan', 'Tyler Ho', 'Jui-Chien Lin', 'Farhad Mohsin', 'Oshani Seneviratne', 'Lirong Xia']",http://arxiv.org/pdf/2310.08523v1.pdf,2023-10-12,," Finding preferences expressed in natural language is an important but challenging task. State-of-the-art (SotA) methods leverage transformer-based models such as BERT, RoBERTa, etc. and graph neural architectures such as graph attention networks. Since Large Language Models (LLMs) are equipped to deal with larger context lengths and have much larger model sizes than the transformer-based model, we investigate their ability to classify comparative text directly. This work aims to serve as a first step towards using LLMs for the CPC task. We design and conduct a set of experiments that format the classification task into an input prompt for the LLM and a methodology to get a fixed-format response that can be automatically evaluated. Comparing performances with existing methods, we see that pre-trained LLMs are able to outperform the previous SotA models with no fine-tuning involved. Our results show that the LLMs can consistently outperform the SotA when the target text is large -- i.e. composed of multiple sentences --, and are still comparable to the SotA performance in shorter text. We also find that few-shot learning yields better performance than zero-shot learning.",,arXiv,['cs.cl'],, incontext learning for fewshot molecular property prediction,"['Christopher Fifty', 'Jure Leskovec', 'Sebastian Thrun']",http://arxiv.org/pdf/2310.08863v1.pdf,2023-10-13,," In-context learning has become an important approach for few-shot learning in Large Language Models because of its ability to rapidly adapt to new tasks without fine-tuning model parameters. However, it is restricted to applications in natural language and inapplicable to other domains. In this paper, we adapt the concepts underpinning in-context learning to develop a new algorithm for few-shot molecular property prediction. Our approach learns to predict molecular properties from a context of (molecule, property measurement) pairs and rapidly adapts to new properties without fine-tuning. 
On the FS-Mol and BACE molecular property prediction benchmarks, we find this method surpasses the performance of recent meta-learning algorithms at small support sizes and is competitive with the best methods at large support sizes.",,arXiv,['cs.lg'],, group preference optimization fewshot alignment of large language models,"['Siyan Zhao', 'John Dang', 'Aditya Grover']",http://arxiv.org/pdf/2310.11523v1.pdf,2023-10-17,," Many applications of large language models (LLMs), ranging from chatbots to creative writing, require nuanced subjective judgments that can differ significantly across different groups. Existing alignment algorithms can be expensive to align for each group, requiring prohibitive amounts of group-specific preference data and computation for real-world use cases. We introduce Group Preference Optimization (GPO), an alignment framework that steers language models to preferences of individual groups in a few-shot manner. In GPO, we augment the base LLM with an independent transformer module trained to predict the preferences of a group for the LLM generations. For few-shot learning, we parameterize this module as an in-context autoregressive transformer and train it via meta-learning on several groups. We empirically validate the efficacy of GPO through rigorous evaluations using LLMs with varied sizes on three human opinion adaptation tasks. These tasks involve adapting to the preferences of US demographic groups, global countries, and individual users. Our results demonstrate that GPO not only aligns models more accurately but also requires fewer group-specific preferences, and less training and inference computing resources, outperforming existing strategies such as in-context steering and fine-tuning methods.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cl']",, clara multilingual contrastive learning for audio representation acquisition,"['Kari A Noriy', 'Xiaosong Yang', 'Marcin Budka', 'Jian Jun Zhang']",http://arxiv.org/pdf/2310.11830v2.pdf,2023-10-18,," Multilingual speech processing requires understanding emotions, a task made difficult by limited labelled data. CLARA, minimizes reliance on labelled data, enhancing generalization across languages. It excels at fostering shared representations, aiding cross-lingual transfer of speech and emotions, even with little data. Our approach adeptly captures emotional nuances in speech, overcoming subjective assessment issues. Using a large multilingual audio corpus and self-supervised learning, CLARA develops speech representations enriched with emotions, advancing emotion-aware multilingual speech processing. Our method expands the data range using data augmentation, textual embedding for visual understanding, and transfers knowledge from high- to low-resource languages. CLARA demonstrates excellent performance in emotion recognition, language comprehension, and audio benchmarks, excelling in zero-shot and few-shot learning. 
It adapts to low-resource languages, marking progress in multilingual speech representation learning.",,arXiv,"['cs.sd', 'cs.lg', 'cs.mm', 'eess.as']",, a tale of pronouns interpretability informs gender bias mitigation for fairer instructiontuned machine translation,"['Giuseppe Attanasio', 'Flor Miriam Plaza-del-Arco', 'Debora Nozza', 'Anne Lauscher']",http://arxiv.org/pdf/2310.12127v2.pdf,2023-10-18,," Recent instruction fine-tuned models can solve multiple NLP tasks when prompted to do so, with machine translation (MT) being a prominent use case. However, current research often focuses on standard performance benchmarks, leaving compelling fairness and ethical considerations behind. In MT, this might lead to misgendered translations, resulting, among other harms, in the perpetuation of stereotypes and prejudices. In this work, we address this gap by investigating whether and to what extent such models exhibit gender bias in machine translation and how we can mitigate it. Concretely, we compute established gender bias metrics on the WinoMT corpus from English to German and Spanish. We discover that IFT models default to male-inflected translations, even disregarding female occupational stereotypes. Next, using interpretability methods, we unveil that models systematically overlook the pronoun indicating the gender of a target occupation in misgendered translations. Finally, based on this finding, we propose an easy-to-implement and effective bias mitigation solution based on few-shot learning that leads to significantly fairer translations.",,arXiv,"['cs.cl', 'cs.lg']",, an exploration of incontext learning for speech language model,"['Ming-Hao Hsu', 'Kai-Wei Chang', 'Shang-Wen Li', 'Hung-yi Lee']",http://arxiv.org/pdf/2310.12477v1.pdf,2023-10-19,," Ever since the development of GPT-3 in the natural language processing (NLP) field, in-context learning (ICL) has played an important role in utilizing large language models (LLMs). By presenting the LM utterance-label demonstrations at the input, the LM can accomplish few-shot learning without relying on gradient descent or requiring explicit modification of its parameters. This enables the LM to learn and adapt in a black-box manner. Despite the success of ICL in NLP, little work is exploring the possibility of ICL in speech processing. This study proposes the first exploration of ICL with a speech LM without text supervision. We first show that the current speech LM does not have the ICL capability. With the proposed warmup training, the speech LM can, therefore, perform ICL on unseen tasks. In this work, we verify the feasibility of ICL for speech LM on speech classification tasks.",,arXiv,"['eess.as', 'cs.ai', 'cs.cl']",, improving fewshot generalization of safety classifiers via data augmented parameterefficient finetuning,"['Ananth Balashankar', 'Xiao Ma', 'Aradhana Sinha', 'Ahmad Beirami', 'Yao Qin', 'Jilin Chen', 'Alex Beutel']",http://arxiv.org/pdf/2310.16959v1.pdf,2023-10-25,," As large language models (LLMs) are widely adopted, new safety issues and policies emerge, to which existing safety classifiers do not generalize well. If we have only observed a few examples of violations of a new safety rule, how can we build a classifier to detect violations? In this paper, we study the novel setting of domain-generalized few-shot learning for LLM-based text safety classifiers. Unlike prior few-shot work, these new safety issues can be hard to uncover and we do not get to choose the few examples. 
We demonstrate that existing few-shot techniques do not perform well in this setting, and rather we propose to do parameter-efficient fine-tuning (PEFT) combined with augmenting training data based on similar examples in prior existing rules. We empirically show that our approach of similarity-based data-augmentation + prompt-tuning (DAPT) consistently outperforms baselines that either do not rely on data augmentation or on PEFT by 7-17% F1 score in the Social Chemistry moral judgement and 9-13% AUC in the Toxicity detection tasks, even when the new rule is loosely correlated with existing ones.",,arXiv,['cs.lg'],, retrofitting lightweight language models for emotions using supervised contrastive learning,"['Sapan Shah', 'Sreedhar Reddy', 'Pushpak Bhattacharyya']",http://arxiv.org/pdf/2310.18930v1.pdf,2023-10-29,," We present a novel retrofitting method to induce emotion aspects into pre-trained language models (PLMs) such as BERT and RoBERTa. Our method updates pre-trained network weights using contrastive learning so that the text fragments exhibiting similar emotions are encoded nearby in the representation space, and the fragments with different emotion content are pushed apart. While doing so, it also ensures that the linguistic knowledge already present in PLMs is not inadvertently perturbed. The language models retrofitted by our method, i.e., BERTEmo and RoBERTaEmo, produce emotion-aware text representations, as evaluated through different clustering and retrieval metrics. For the downstream tasks on sentiment analysis and sarcasm detection, they perform better than their pre-trained counterparts (about 1% improvement in F1-score) and other existing approaches. Additionally, a more significant boost in performance is observed for the retrofitted models over pre-trained ones in few-shot learning setting.",,arXiv,['cs.cl'],, nexus at araieval shared task finetuning arabic language models for propaganda and disinformation detection,"['Yunze Xiao', 'Firoj Alam']",http://arxiv.org/pdf/2311.03184v1.pdf,2023-11-06,," The spread of disinformation and propagandistic content poses a threat to societal harmony, undermining informed decision-making and trust in reliable sources. Online platforms often serve as breeding grounds for such content, and malicious actors exploit the vulnerabilities of audiences to shape public opinion. Although there have been research efforts aimed at the automatic identification of disinformation and propaganda in social media content, there remain challenges in terms of performance. The ArAIEval shared task aims to further research on these particular issues within the context of the Arabic language. In this paper, we discuss our participation in these shared tasks. We competed in subtasks 1A and 2A, where our submitted system secured positions 9th and 10th, respectively. Our experiments consist of fine-tuning transformer models and using zero- and few-shot learning with GPT-4.",,arXiv,"['cs.cl', 'cs.ai', 'cs.si', '68t50', 'f.2.2; i.2.7']",, multilingual mathematical autoformalization,"['Albert Q. Jiang', 'Wenda Li', 'Mateja Jamnik']",http://arxiv.org/pdf/2311.03755v2.pdf,2023-11-07,," Autoformalization is the task of translating natural language materials into machine-verifiable formalisations. Progress in autoformalization research is hindered by the lack of a sizeable dataset consisting of informal-formal pairs expressing the same essence. Existing methods tend to circumvent this challenge by manually curating small corpora or using few-shot learning with large language models. 
But these methods suffer from data scarcity and formallanguage acquisition difficulty. In this work, we create $\texttt{MMA}$, alarge, flexible, multilingual, and multi-domain dataset of informal-formalpairs, by using a language model to translate in the reverse direction, thatis, from formal mathematical statements into corresponding informal ones.Experiments show that language models fine-tuned on $\texttt{MMA}$ produce$16-18\%$ of statements acceptable with minimal corrections on the$\texttt{miniF2F}$ and $\texttt{ProofNet}$ benchmarks, up from $0\%$ with thebase model. We demonstrate that fine-tuning on multilingual formal data resultsin more capable autoformalization models even when deployed on monolingualtasks.",,arXiv,"['cs.cl', 'cs.lg']",, dataefficient goaloriented conversation with dialogue knowledge transfer networks,"['Igor Shalyminov', 'Sungjin Lee', 'Arash Eshghi', 'Oliver Lemon']",http://arxiv.org/pdf/1910.01302v1.pdf,2019-10-03,," Goal-oriented dialogue systems are now being widely adopted in industry whereit is of key importance to maintain a rapid prototyping cycle for new productsand domains. Data-driven dialogue system development has to be adapted to meetthis requirement --- therefore, reducing the amount of data and annotationsnecessary for training such systems is a central research problem. In this paper, we present the Dialogue Knowledge Transfer Network (DiKTNet),a state-of-the-art approach to goal-oriented dialogue generation which onlyuses a few example dialogues (i.e. few-shot learning), none of which has to beannotated. We achieve this by performing a 2-stage training. Firstly, weperform unsupervised dialogue representation pre-training on a large source ofgoal-oriented dialogues in multiple domains, the MetaLWOz corpus. Secondly, atthe transfer stage, we train DiKTNet using this representation together with 2other textual knowledge sources with different levels of generality: ELMoencoder and the main dataset's source domains. Our main dataset is the Stanford Multi-Domain dialogue corpus. We evaluateour model on it in terms of BLEU and Entity F1 scores, and show that ourapproach significantly and consistently improves upon a series of baselinemodels as well as over the previous state-of-the-art dialogue generation model,ZSDG. The improvement upon the latter --- up to 10% in Entity F1 and theaverage of 3% in BLEU score --- is achieved using only the equivalent of 10% ofZSDG's in-domain training data.",,arXiv,"['cs.cl', 'i.2.7']",, metalearning with dynamicmemorybased prototypical network for fewshot event detection,"['Shumin Deng', 'Ningyu Zhang', 'Jiaojian Kang', 'Yichi Zhang', 'Wei Zhang', 'Huajun Chen']",http://arxiv.org/pdf/1910.11621v2.pdf,2019-10-25,," Event detection (ED), a sub-task of event extraction, involves identifyingtriggers and categorizing event mentions. Existing methods primarily rely uponsupervised learning and require large-scale labeled event datasets which areunfortunately not readily available in many real-life applications. In thispaper, we consider and reformulate the ED task with limited labeled data as aFew-Shot Learning problem. We propose a Dynamic-Memory-Based PrototypicalNetwork (DMB-PN), which exploits Dynamic Memory Network (DMN) to not only learnbetter prototypes for event types, but also produce more robust sentenceencodings for event mentions. 
Differing from vanilla prototypical networks simply computing event prototypes by averaging, which only consume event mentions once, our model is more robust and is capable of distilling contextual information from event mentions multiple times due to the multi-hop mechanism of DMNs. The experiments show that DMB-PN not only deals with sample scarcity better than a series of baseline models but also performs more robustly when the variety of event types is relatively large and the instance quantity is extremely small.",,arXiv,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",, amp0 speciesspecific prediction of antimicrobial peptides using zero and few shot learning,"['Sadaf Gull', 'Fayyaz Minhas']",http://arxiv.org/pdf/1911.06106v1.pdf,2019-10-28,," The evolution of drug-resistant microbial species is one of the major challenges to global health. The development of new antimicrobial treatments such as antimicrobial peptides needs to be accelerated to combat this threat. However, the discovery of novel antimicrobial peptides is hampered by low-throughput biochemical assays. Computational techniques can be used for rapid screening of promising antimicrobial peptide candidates prior to testing in the wet lab. The vast majority of existing antimicrobial peptide predictors are non-targeted in nature, i.e., they can predict whether a given peptide sequence is antimicrobial, but they are unable to predict whether the sequence can target a particular microbial species. In this work, we have developed a targeted antimicrobial peptide activity predictor that can predict whether a peptide is effective against a given microbial species or not. This has been made possible through zero-shot and few-shot machine learning. The proposed predictor called AMP0 takes in the peptide amino acid sequence and any N/C-termini modifications together with the genomic sequence of a target microbial species to generate targeted predictions. It is important to note that the proposed method can generate predictions for species that are not part of its training set. The accuracy of predictions for novel test species can be further improved by providing a few example peptides for that species. Our computational cross-validation results show that the proposed scheme is particularly effective for targeted antimicrobial prediction in comparison to existing approaches and can be used for screening potential antimicrobial peptides in a targeted manner, especially for cases in which the number of training examples is small. The webserver of the method is available at http://ampzero.pythonanywhere.com.",,arXiv,"['q-bio.bm', 'cs.lg', 'stat.ml']",, what makes good incontext examples for gpt$3$,"['Jiachang Liu', 'Dinghan Shen', 'Yizhe Zhang', 'Bill Dolan', 'Lawrence Carin', 'Weizhu Chen']",http://arxiv.org/pdf/2101.06804v1.pdf,2021-01-17,," GPT-$3$ has attracted lots of attention due to its superior performance across a wide range of NLP tasks, especially with its powerful and versatile in-context few-shot learning ability. Despite its success, we found that the empirical results of GPT-$3$ depend heavily on the choice of in-context examples. In this work, we investigate whether there are more effective strategies for judiciously selecting in-context examples (relative to random sampling) that better leverage GPT-$3$'s few-shot capabilities. Inspired by the recent success of leveraging a retrieval module to augment large-scale neural network models, we propose to retrieve examples that are semantically similar to a test sample to formulate its corresponding prompt.
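A minimal sketch of this retrieval-based prompt construction: embed the training pool and the test query, pick the k nearest neighbours, and prepend them as demonstrations. TF-IDF cosine similarity stands in here for the (fine-tuned) sentence encoders the abstract describes, and the pool and task are hypothetical:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical labelled pool for a sentiment task.
pool = [("the movie was wonderful", "positive"),
        ("i hated every minute of it", "negative"),
        ("a delightful, heartwarming film", "positive"),
        ("utterly boring and far too long", "negative")]
test_input = "an absolutely charming picture"
k = 2

texts = [x for x, _ in pool]
vec = TfidfVectorizer().fit(texts + [test_input])
sims = cosine_similarity(vec.transform([test_input]), vec.transform(texts))[0]
nearest = sims.argsort()[::-1][:k]  # indices of the k most similar pool examples

# Build the few-shot prompt from the retrieved demonstrations.
prompt = "".join(f"Review: {texts[i]}\nSentiment: {pool[i][1]}\n\n" for i in nearest)
prompt += f"Review: {test_input}\nSentiment:"
print(prompt)  # this string would then be sent to the language model
```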
Intuitively, thein-context examples selected with such a strategy may serve as more informativeinputs to unleash GPT-$3$'s extensive knowledge. We evaluate the proposedapproach on several natural language understanding and generation benchmarks,where the retrieval-based prompt selection approach consistently outperformsthe random baseline. Moreover, it is observed that the sentence encodersfine-tuned on task-related datasets yield even more helpful retrieval results.Notably, significant gains are observed on tasks such as table-to-textgeneration (41.9% on the ToTTo dataset) and open-domain question answering(45.5% on the NQ dataset). We hope our investigation could help understand thebehaviors of GPT-$3$ and large-scale pre-trained LMs in general and enhancetheir few-shot capabilities.",,arXiv,['cs.cl'],, robust retrieval augmented generation for zeroshot slot filling,"['Michael Glass', 'Gaetano Rossiello', 'Md Faisal Mahbub Chowdhury', 'Alfio Gliozzo']",http://arxiv.org/pdf/2108.13934v2.pdf,2021-08-31,," Automatically inducing high quality knowledge graphs from a given collectionof documents still remains a challenging problem in AI. One way to make headwayfor this problem is through advancements in a related task known as slotfilling. In this task, given an entity query in form of [Entity, Slot, ?], asystem is asked to fill the slot by generating or extracting the missing valueexploiting evidence extracted from relevant passage(s) in the given documentcollection. The recent works in the field try to solve this task in anend-to-end fashion using retrieval-based language models. In this paper, wepresent a novel approach to zero-shot slot filling that extends dense passageretrieval with hard negatives and robust training procedures for retrievalaugmented generation models. Our model reports large improvements on both T-RExand zsRE slot filling datasets, improving both passage retrieval and slot valuegeneration, and ranking at the top-1 position in the KILT leaderboard.Moreover, we demonstrate the robustness of our system showing its domainadaptation capability on a new variant of the TACRED dataset for slot filling,through a combination of zero/few-shot learning. We release the source code andpre-trained models.",,arXiv,"['cs.cl', 'cs.ai', 'cs.ir']",, raft a realworld fewshot text classification benchmark,"['Neel Alex', 'Eli Lifland', 'Lewis Tunstall', 'Abhishek Thakur', 'Pegah Maham', 'C. Jess Riedel', 'Emmie Hine', 'Carolyn Ashurst', 'Paul Sedille', 'Alexis Carlier', 'Michael Noetel', 'Andreas Stuhlmüller']",http://arxiv.org/pdf/2109.14076v3.pdf,2021-09-28,," Large pre-trained language models have shown promise for few-shot learning,completing text-based tasks given only a few task-specific examples. Willmodels soon solve classification tasks that have so far been reserved for humanresearch assistants? Existing benchmarks are not designed to measure progressin applied settings, and so don't directly answer this question. The RAFTbenchmark (Real-world Annotated Few-shot Tasks) focuses on naturally occurringtasks and uses an evaluation setup that mirrors deployment. Baselineevaluations on RAFT reveal areas current techniques struggle with: reasoningover long texts and tasks with many classes. Human baselines show that someclassification tasks are difficult for non-expert humans, reflecting thatreal-world value sometimes depends on domain expertise. Yet even non-experthuman baseline F1 scores exceed GPT-3 by an average of 0.11. 
The RAFT datasetsand leaderboard will track which model improvements translate into real-worldbenefits at https://raft.elicit.org .",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, braininspired globallocal learning incorporated with neuromorphic computing,"['Yujie Wu', 'Rong Zhao', 'Jun Zhu', 'Feng Chen', 'Mingkun Xu', 'Guoqi Li', 'Sen Song', 'Lei Deng', 'Guanrui Wang', 'Hao Zheng', 'Jing Pei', 'Youhui Zhang', 'Mingguo Zhao', 'Luping Shi']",http://arxiv.org/pdf/2006.03226v3.pdf,2020-06-05,," Two main routes of learning methods exist at present including error-drivenglobal learning and neuroscience-oriented local learning. Integrating them intoone network may provide complementary learning capabilities for versatilelearning scenarios. At the same time, neuromorphic computing holds greatpromise, but still needs plenty of useful algorithms and algorithm-hardwareco-designs for exploiting the advantages. Here, we report a neuromorphic hybridlearning model by introducing a brain-inspired meta-learning paradigm and adifferentiable spiking model incorporating neuronal dynamics and synapticplasticity. It can meta-learn local plasticity and receive top-down supervisioninformation for multiscale synergic learning. We demonstrate the advantages ofthis model in multiple different tasks, including few-shot learning, continuallearning, and fault-tolerance learning in neuromorphic vision sensors. Itachieves significantly higher performance than single-learning methods, andshows promise in empowering neuromorphic applications revolution. We furtherimplemented the hybrid model in the Tianjic neuromorphic platform by exploitingalgorithm-hardware co-designs and proved that the model can fully utilizeneuromorphic many-core architecture to develop hybrid computation paradigm.",,arXiv,"['cs.ne', 'cs.ai', 'q-bio.nc']",, direct multimodal fewshot learning of speech and images,"['Leanne Nortje', 'Herman Kamper']",http://arxiv.org/pdf/2012.05680v2.pdf,2020-12-10,," We propose direct multimodal few-shot models that learn a shared embeddingspace of spoken words and images from only a few paired examples. Imagine anagent is shown an image along with a spoken word describing the object in thepicture, e.g. pen, book and eraser. After observing a few paired examples ofeach class, the model is asked to identify the ""book"" in a set of unseenpictures. Previous work used a two-step indirect approach relying on learnedunimodal representations: speech-speech and image-image comparisons areperformed across the support set of given speech-image pairs. We propose twodirect models which instead learn a single multimodal space where inputs fromdifferent modalities are directly comparable: a multimodal triplet network(MTriplet) and a multimodal correspondence autoencoder (MCAE). To train thesedirect models, we mine speech-image pairs: the support set is used to pair upunlabelled in-domain speech and images. In a speech-to-image digit matchingtask, direct models outperform indirect models, with the MTriplet achieving thebest multimodal five-shot accuracy. 
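As a rough illustration of the direct multimodal objective described above, a triplet loss can pull a spoken-word embedding toward its paired image embedding and push it away from a mismatched one. A minimal PyTorch sketch with hypothetical encoder outputs (not the MTriplet architecture itself):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
embed_dim = 64

# Hypothetical batch of already-encoded inputs: anchor speech segments,
# their paired images (positives), and mismatched images (negatives).
speech_anchor = torch.randn(8, embed_dim, requires_grad=True)
image_positive = torch.randn(8, embed_dim)
image_negative = torch.randn(8, embed_dim)

# Margin-based triplet objective over the shared speech-image space.
triplet = nn.TripletMarginLoss(margin=0.2)
loss = triplet(speech_anchor, image_positive, image_negative)
loss.backward()
print(float(loss))
```

In practice the anchors and positives would come from separate speech and image encoders trained jointly, with pairs mined from the support set as the abstract describes.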
We show that the improvements are due to the combination of unsupervised and transfer learning in the direct models, and the absence of two-step compounding errors.",,arXiv,"['cs.cl', 'cs.sd', 'eess.as']",, spirit distillation precise realtime semantic segmentation of road scenes with insufficient data,"['Zhiyuan Wu', 'Yu Jiang', 'Chupeng Cui', 'Zongmin Yang', 'Xinhui Xue', 'Hong Qi']",http://arxiv.org/pdf/2103.13733v2.pdf,2021-03-25,," Semantic segmentation of road scenes is one of the key technologies for realizing autonomous driving scene perception, and the effectiveness of deep Convolutional Neural Networks (CNNs) for this task has been demonstrated. State-of-the-art CNNs for semantic segmentation suffer from excessive computation as well as large-scale training data requirements. Inspired by the ideas of Fine-tuning-based Transfer Learning (FTT) and feature-based knowledge distillation, we propose a new knowledge distillation method for cross-domain knowledge transference and efficient data-insufficient network training, named Spirit Distillation (SD), which allows the student network to mimic the teacher network to extract general features, so that a compact and accurate student network can be trained for real-time semantic segmentation of road scenes. Then, in order to further alleviate the trouble of insufficient data and improve the robustness of the student, an Enhanced Spirit Distillation (ESD) method is proposed, which commits to exploiting a more comprehensive general feature extraction capability by considering images from both the target and the proximity domains as input. To our knowledge, this paper is a pioneering work on the application of knowledge distillation to few-shot learning. Persuasive experiments conducted on Cityscapes semantic segmentation with the prior knowledge transferred from COCO2017 and KITTI demonstrate that our methods can train a better student network (mIOU and high-precision accuracy boost by 1.4% and 8.2%, respectively, with 78.2% segmentation variance) with only 41.8% FLOPs (see Fig. 1).",,arXiv,"['cs.cv', 'cs.ai', 'cs.lg']",, modelling latent translations for crosslingual transfer,"['Edoardo Maria Ponti', 'Julia Kreutzer', 'Ivan Vulić', 'Siva Reddy']",http://arxiv.org/pdf/2107.11353v1.pdf,2021-07-23,," While achieving state-of-the-art results in multiple tasks and languages, translation-based cross-lingual transfer is often overlooked in favour of massively multilingual pre-trained encoders. Arguably, this is due to its main limitations: 1) translation errors percolating to the classification phase and 2) the insufficient expressiveness of the maximum-likelihood translation. To remedy this, we propose a new technique that integrates both steps of the traditional pipeline (translation and classification) into a single model, by treating the intermediate translations as a latent random variable. As a result, 1) the neural machine translation system can be fine-tuned with a variant of Minimum Risk Training where the reward is the accuracy of the downstream task classifier. Moreover, 2) multiple samples can be drawn to approximate the expected loss across all possible translations during inference. We evaluate our novel latent translation-based model on a series of multilingual NLU tasks, including commonsense reasoning, paraphrase identification, and natural language inference. We report gains for both zero-shot and few-shot learning setups, up to 2.7 accuracy points on average, which are even more prominent for low-resource languages (e.g., Haitian Creole).
Finally, we carry out in-depth analyses comparing different underlyingNMT models and assessing the impact of alternative translations on thedownstream performance.",,arXiv,['cs.cl'],, prototransformer a metalearning approach to providing student feedback,"['Mike Wu', 'Noah Goodman', 'Chris Piech', 'Chelsea Finn']",http://arxiv.org/pdf/2107.14035v2.pdf,2021-07-23,," High-quality computer science education is limited by the difficulty ofproviding instructor feedback to students at scale. While this feedback couldin principle be automated, supervised approaches to predicting the correctfeedback are bottlenecked by the intractability of annotating large quantitiesof student code. In this paper, we instead frame the problem of providingfeedback as few-shot classification, where a meta-learner adapts to givefeedback to student code on a new programming question from just a few examplesannotated by instructors. Because data for meta-training is limited, we proposea number of amendments to the typical few-shot learning framework, includingtask augmentation to create synthetic tasks, and additional side information tobuild stronger priors about each task. These additions are combined with atransformer architecture to embed discrete sequences (e.g. code) to aprototypical representation of a feedback class label. On a suite of few-shotnatural language processing tasks, we match or outperform state-of-the-artperformance. Then, on a collection of student solutions to exam questions froman introductory university course, we show that our approach reaches an averageprecision of 88% on unseen questions, surpassing the 82% precision of teachingassistants. Our approach was successfully deployed to deliver feedback to16,000 student exam-solutions in a programming course offered by a tier 1university. This is, to the best of our knowledge, the first successfuldeployment of a machine learning based feedback to open-ended student code.",,arXiv,"['cs.cy', 'cs.lg']",, lfpt5 a unified framework for lifelong fewshot language learning based on prompt tuning of t5,"['Chengwei Qin', 'Shafiq Joty']",http://arxiv.org/pdf/2110.07298v3.pdf,2021-10-14,," Existing approaches to lifelong language learning rely on plenty of labeleddata for learning a new task, which is hard to obtain in most real scenarios.Considering that humans can continually learn new tasks from a handful ofexamples, we expect the models also to be able to generalize well on newfew-shot tasks without forgetting the previous ones. In this work, we definethis more challenging yet practical problem as Lifelong Few-shot LanguageLearning (LFLL) and propose a unified framework for it based on prompt tuningof T5. Our framework called LFPT5 takes full advantage of PT's strong few-shotlearning ability, and simultaneously trains the model as a task solver and adata generator. Before learning a new domain of the same task type, LFPT5generates pseudo (labeled) samples of previously learned domains, and latergets trained on those samples to alleviate forgetting of previous knowledge asit learns the new domain. In addition, a KL divergence loss is minimized toachieve label consistency between the previous and the current model. Whileadapting to a new task type, LFPT5 includes and tunes additional promptembeddings for the new task. 
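Prompt tuning of the kind LFPT5 builds on keeps the underlying LM frozen and learns only a small matrix of prompt embeddings prepended to the input. A minimal PyTorch sketch of that mechanism with a stand-in frozen backbone (an illustration of generic soft-prompt tuning, not the LFPT5/T5 implementation):

```python
import torch
import torch.nn as nn

class SoftPromptWrapper(nn.Module):
    """Prepend trainable prompt embeddings to token embeddings of a frozen model."""
    def __init__(self, frozen_encoder, embed, num_prompt_tokens=20):
        super().__init__()
        self.encoder, self.embed = frozen_encoder, embed
        for p in list(self.encoder.parameters()) + list(self.embed.parameters()):
            p.requires_grad = False  # only the prompt embeddings are updated
        dim = embed.embedding_dim
        self.prompt = nn.Parameter(torch.randn(num_prompt_tokens, dim) * 0.02)

    def forward(self, input_ids):
        tok = self.embed(input_ids)                                   # (batch, seq, dim)
        prompt = self.prompt.unsqueeze(0).expand(tok.size(0), -1, -1)
        return self.encoder(torch.cat([prompt, tok], dim=1))          # prompt + tokens

# Hypothetical frozen backbone: an embedding table plus a tiny transformer encoder.
embed = nn.Embedding(1000, 32)
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True),
    num_layers=2)
model = SoftPromptWrapper(backbone, embed)

out = model(torch.randint(0, 1000, (2, 16)))                          # (2, 20 + 16, 32)
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(out.shape, trainable)  # only 'prompt' should be listed as trainable
```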
With extensive experiments, we demonstrate thatLFPT5 can be applied to various different types of tasks and significantlyoutperform previous methods in different LFLL settings.",,arXiv,['cs.cl'],, metaicl learning to learn in context,"['Sewon Min', 'Mike Lewis', 'Luke Zettlemoyer', 'Hannaneh Hajishirzi']",http://arxiv.org/pdf/2110.15943v2.pdf,2021-10-29,," We introduce MetaICL (Meta-training for In-Context Learning), a newmeta-training framework for few-shot learning where a pretrained language modelis tuned to do in-context learning on a large set of training tasks. Thismeta-training enables the model to more effectively learn a new task in contextat test time, by simply conditioning on a few training examples with noparameter updates or task-specific templates. We experiment on a large, diversecollection of tasks consisting of 142 NLP datasets including classification,question answering, natural language inference, paraphrase detection and more,across seven different meta-training/target splits. MetaICL outperforms a rangeof baselines including in-context learning without meta-training and multi-tasklearning followed by zero-shot transfer. We find that the gains areparticularly significant for target tasks that have domain shifts from themeta-training tasks, and that using a diverse set of the meta-training tasks iskey to improvements. We also show that MetaICL approaches (and sometimes beats)the performance of models fully finetuned on the target task, and outperformsmuch bigger models with nearly 8x parameters. Finally, we show that MetaICL iscomplementary to human-written instructions, and the best performance can beachieved by combining both approaches.",,arXiv,"['cs.cl', 'cs.ai']",, scaling asr improves zero and few shot learning,"['Alex Xiao', 'Weiyi Zheng', 'Gil Keren', 'Duc Le', 'Frank Zhang', 'Christian Fuegen', 'Ozlem Kalinli', 'Yatharth Saraf', 'Abdelrahman Mohamed']",http://arxiv.org/pdf/2111.05948v3.pdf,2021-11-10,," With 4.5 million hours of English speech from 10 different sources across 120countries and models of up to 10 billion parameters, we explore the frontiersof scale for automatic speech recognition. We propose data selection techniquesto efficiently scale training data to find the most valuable samples in massivedatasets. To efficiently scale model sizes, we leverage various optimizationssuch as sparse transducer loss and model sharding. By training 1-10B parameteruniversal English ASR models, we push the limits of speech recognitionperformance across many domains. Furthermore, our models learn powerful speechrepresentations with zero and few-shot capabilities on novel domains and stylesof speech, exceeding previous results across multiple in-house and publicbenchmarks. For speakers with disorders due to brain damage, our best zero-shotand few-shot models achieve 22% and 60% relative improvement on the AphasiaBanktest set, respectively, while realizing the best performance on public socialmedia videos. 
Furthermore, the same universal model reaches equivalentperformance with 500x less in-domain data on the SPGISpeech financial-domaindataset.",,arXiv,"['cs.cl', 'cs.sd', 'eess.as']",, pointclip point cloud understanding by clip,"['Renrui Zhang', 'Ziyu Guo', 'Wei Zhang', 'Kunchang Li', 'Xupeng Miao', 'Bin Cui', 'Yu Qiao', 'Peng Gao', 'Hongsheng Li']",http://arxiv.org/pdf/2112.02413v1.pdf,2021-12-04,," Recently, zero-shot and few-shot learning via Contrastive Vision-LanguagePre-training (CLIP) have shown inspirational performance on 2D visualrecognition, which learns to match images with their corresponding texts inopen-vocabulary settings. However, it remains under explored that whether CLIP,pre-trained by large-scale image-text pairs in 2D, can be generalized to 3Drecognition. In this paper, we identify such a setting is feasible by proposingPointCLIP, which conducts alignment between CLIP-encoded point cloud and 3Dcategory texts. Specifically, we encode a point cloud by projecting it intomulti-view depth maps without rendering, and aggregate the view-wise zero-shotprediction to achieve knowledge transfer from 2D to 3D. On top of that, wedesign an inter-view adapter to better extract the global feature andadaptively fuse the few-shot knowledge learned from 3D into CLIP pre-trained in2D. By just fine-tuning the lightweight adapter in the few-shot settings, theperformance of PointCLIP could be largely improved. In addition, we observe thecomplementary property between PointCLIP and classical 3D-supervised networks.By simple ensembling, PointCLIP boosts baseline's performance and evensurpasses state-of-the-art models. Therefore, PointCLIP is a promisingalternative for effective 3D point cloud understanding via CLIP under lowresource cost and data regime. We conduct thorough experiments onwidely-adopted ModelNet10, ModelNet40 and the challenging ScanObjectNN todemonstrate the effectiveness of PointCLIP. The code is released athttps://github.com/ZrrSkywalker/PointCLIP.",,arXiv,"['cs.cv', 'cs.ai', 'cs.ro']",, "visionlanguage intelligence tasks, representation learning, and large models","['Feng Li', 'Hao Zhang', 'Yi-Fan Zhang', 'Shilong Liu', 'Jian Guo', 'Lionel M. Ni', 'PengChuan Zhang', 'Lei Zhang']",http://arxiv.org/pdf/2203.01922v1.pdf,2022-03-03,," This paper presents a comprehensive survey of vision-language (VL)intelligence from the perspective of time. This survey is inspired by theremarkable progress in both computer vision and natural language processing,and recent trends shifting from single modality processing to multiple modalitycomprehension. We summarize the development in this field into three timeperiods, namely task-specific methods, vision-language pre-training (VLP)methods, and larger models empowered by large-scale weakly-labeled data. Wefirst take some common VL tasks as examples to introduce the development oftask-specific methods. Then we focus on VLP methods and comprehensively reviewkey components of the model structures and training methods. After that, weshow how recent work utilizes large-scale raw image-text data to learnlanguage-aligned visual representations that generalize better on zero or fewshot learning tasks. Finally, we discuss some potential future trends towardsmodality cooperation, unified representation, and knowledge incorporation. 
Webelieve that this review will be of help for researchers and practitioners ofAI and ML, especially those interested in computer vision and natural languageprocessing.",,arXiv,"['cs.cv', 'cs.ai', 'cs.cl']",, rethinking task sampling for fewshot visionlanguage transfer learning,"['Zhenhailong Wang', 'Hang Yu', 'Manling Li', 'Han Zhao', 'Heng Ji']",http://arxiv.org/pdf/2203.04904v3.pdf,2022-03-09,," Despite achieving state-of-the-art zero-shot performance, existingvision-language models still fall short of few-shot transfer ability ondomain-specific problems. Classical fine-tuning often fails to prevent highlyexpressive models from exploiting spurious correlations. Althoughmodel-agnostic meta-learning (MAML) presents as a natural alternative forfew-shot transfer learning, the expensive computation due to implicitsecond-order optimization limits its use on large-scale vision-language modelssuch as CLIP. While much literature has been devoted to exploring alternativeoptimization strategies, we identify another essential aspect towards effectivefew-shot transfer learning, task sampling, which is previously only be viewedas part of data pre-processing in MAML. To show the impact of task sampling, wepropose a simple algorithm, Model-Agnostic Multitask Fine-tuning (MAMF), whichdifferentiates classical fine-tuning only on uniformly sampling multiple tasks.Despite its simplicity, we show that MAMF consistently outperforms classicalfine-tuning on five few-shot vision-language classification tasks. We furthershow that the effectiveness of the bi-level optimization in MAML is highlysensitive to the zero-shot performance of a task in the context of few-shotvision-language classification. The goal of this paper is to provide newinsights on what makes few-shot learning work, and encourage more research intoinvestigating better task sampling strategies.",,arXiv,"['cs.mm', 'cs.cl', 'cs.cv']",, mgpt fewshot learners go multilingual,"['Oleh Shliazhko', 'Alena Fenogenova', 'Maria Tikhonova', 'Vladislav Mikhailov', 'Anastasia Kozlova', 'Tatiana Shavrina']",http://arxiv.org/pdf/2204.07580v2.pdf,2022-04-15,," Recent studies report that autoregressive language models can successfullysolve many NLP tasks via zero- and few-shot learning paradigms, which opens upnew possibilities for using the pre-trained language models. This paperintroduces two autoregressive GPT-like models with 1.3 billion and 13 billionparameters trained on 60 languages from 25 language families using Wikipediaand Colossal Clean Crawled Corpus. We reproduce the GPT-3 architecture usingGPT-2 sources and the sparse attention mechanism; Deepspeed and Megatronframeworks allow us to parallelize the training and inference stepseffectively. The resulting models show performance on par with the recentlyreleased XGLM models by Facebook, covering more languages and enhancing NLPpossibilities for low resource languages of CIS countries and Russian smallnations. We detail the motivation for the choices of the architecture design,thoroughly describe the data preparation pipeline, and train five smallversions of the model to choose the most optimal multilingual tokenizationstrategy. We measure the model perplexity in all covered languages and evaluateit on the wide spectre of multilingual tasks, including classification,generative, sequence labeling and knowledge probing. The models were evaluatedwith the zero-shot and few-shot methods. Furthermore, we compared theclassification tasks with the state-of-the-art multilingual model XGLM. 
The source code and the mGPT XL model are publicly released.",,arXiv,"['cs.cl', 'cs.ai', '68-06, 68-04, 68t50, 68t01', 'i.2; i.2.7']",, opt open pretrained transformer language models,"['Susan Zhang', 'Stephen Roller', 'Naman Goyal', 'Mikel Artetxe', 'Moya Chen', 'Shuohui Chen', 'Christopher Dewan', 'Mona Diab', 'Xian Li', 'Xi Victoria Lin', 'Todor Mihaylov', 'Myle Ott', 'Sam Shleifer', 'Kurt Shuster', 'Daniel Simig', 'Punit Singh Koura', 'Anjali Sridhar', 'Tianlu Wang', 'Luke Zettlemoyer']",http://arxiv.org/pdf/2205.01068v4.pdf,2022-05-02,," Large language models, which are often trained for hundreds of thousands of compute days, have shown remarkable capabilities for zero- and few-shot learning. Given their computational cost, these models are difficult to replicate without significant capital. For the few that are available through APIs, no access is granted to the full model weights, making them difficult to study. We present Open Pre-trained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters, which we aim to fully and responsibly share with interested researchers. We show that OPT-175B is comparable to GPT-3, while requiring only 1/7th the carbon footprint to develop. We are also releasing our logbook detailing the infrastructure challenges we faced, along with code for experimenting with all of the released models.",,arXiv,"['cs.cl', 'cs.lg']",, relation extraction as openbook examination retrievalenhanced prompt tuning,"['Xiang Chen', 'Lei Li', 'Ningyu Zhang', 'Chuanqi Tan', 'Fei Huang', 'Luo Si', 'Huajun Chen']",http://arxiv.org/pdf/2205.02355v2.pdf,2022-05-04,," Pre-trained language models have contributed significantly to relation extraction by demonstrating remarkable few-shot learning abilities. However, prompt tuning methods for relation extraction may still fail to generalize to those rare or hard patterns. Note that the previous parametric learning paradigm can be viewed as memorization, regarding training data as a book and inference as the closed-book test. Those long-tailed or hard patterns can hardly be memorized in parameters given few-shot instances. To this end, we regard RE as an open-book examination and propose a new semiparametric paradigm of retrieval-enhanced prompt tuning for relation extraction. We construct an open-book datastore for retrieval, regarding prompt-based instance representations and corresponding relation labels as memorized key-value pairs. During inference, the model can infer relations by linearly interpolating the base output of the PLM with the non-parametric nearest neighbor distribution over the datastore. In this way, our model not only infers relations through knowledge stored in the weights during training but also assists decision-making by unwinding and querying examples in the open-book datastore. Extensive experiments on benchmark datasets show that our method can achieve state-of-the-art results in both standard supervised and few-shot settings. Code is available at https://github.com/zjunlp/PromptKG/tree/main/research/RetrievalRE.",,arXiv,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",, towards unified prompt tuning for fewshot text classification,"['Jianing Wang', 'Chengyu Wang', 'Fuli Luo', 'Chuanqi Tan', 'Minghui Qiu', 'Fei Yang', 'Qiuhui Shi', 'Songfang Huang', 'Ming Gao']",http://arxiv.org/pdf/2205.05313v1.pdf,2022-05-11,," Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-shot text classification by employing task-specific prompts.
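Prompt-based fine-tuning of this kind reformulates classification as filling a [MASK] with task-specific label words (a verbalizer). A minimal zero-shot sketch of the scoring step, assuming the Hugging Face transformers library; the checkpoint name, template, and verbalizer are illustrative choices, not UPT's:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

verbalizer = {"positive": "great", "negative": "terrible"}  # label -> label word
text = "the plot was predictable and the acting was flat"
prompt = f"{text} It was {tok.mask_token}."

inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    logits = mlm(**inputs).logits
mask_pos = (inputs["input_ids"][0] == tok.mask_token_id).nonzero(as_tuple=True)[0]

# Score each class by the MLM logit of its label word at the mask position.
scores = {label: logits[0, mask_pos, tok.convert_tokens_to_ids(word)].item()
          for label, word in verbalizer.items()}
print(max(scores, key=scores.get), scores)
```

Prompt-based fine-tuning then trains the same objective on the few labelled examples instead of using the frozen MLM zero-shot.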
Yet, PLMs are unfamiliar with prompt-style expressions duringpre-training, which limits the few-shot learning performance on downstreamtasks. It would be desirable if the models can acquire some prompting knowledgebefore adaptation to specific NLP tasks. We present the Unified Prompt Tuning(UPT) framework, leading to better few-shot text classification for BERT-stylemodels by explicitly capturing prompting semantics from non-target NLPdatasets. In UPT, a novel paradigm Prompt-Options-Verbalizer is proposed forjoint prompt learning across different NLP tasks, forcing PLMs to capturetask-invariant prompting knowledge. We further design a self-supervised tasknamed Knowledge-enhanced Selective Masked Language Modeling to improve thePLM's generalization abilities for accurate adaptation to previously unseentasks. After multi-task learning across multiple tasks, the PLM can be betterprompt-tuned towards any dissimilar target tasks in low-resourced settings.Experiments over a variety of NLP tasks show that UPT consistently outperformsstate-of-the-arts for prompt-based fine-tuning.",,arXiv,"['cs.cl', 'cs.ai']",, towards answering openended ethical quandary questions,"['Yejin Bang', 'Nayeon Lee', 'Tiezheng Yu', 'Leila Khalatbari', 'Yan Xu', 'Samuel Cahyawijaya', 'Dan Su', 'Bryan Wilie', 'Romain Barraud', 'Elham J. Barezi', 'Andrea Madotto', 'Hayden Kee', 'Pascale Fung']",http://arxiv.org/pdf/2205.05989v3.pdf,2022-05-12,," Considerable advancements have been made in various NLP tasks based on theimpressive power of large language models (LLMs) and many NLP applications aredeployed in our daily lives. In this work, we challenge the capability of LLMswith the new task of Ethical Quandary Generative Question Answering. Ethicalquandary questions are more challenging to address because multiple conflictinganswers may exist to a single quandary. We explore the current capability ofLLMs in providing an answer with a deliberative exchange of differentperspectives to an ethical quandary, in the approach of Socratic philosophy,instead of providing a closed answer like an oracle. We propose a model thatsearches for different ethical principles applicable to the ethical quandaryand generates an answer conditioned on the chosen principles throughprompt-based few-shot learning. We also discuss the remaining challenges andethical issues involved in this task and suggest the direction towarddeveloping responsible NLP systems by incorporating human values explicitly.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, promptda labelguided data augmentation for promptbased fewshot learners,"['Canyu Chen', 'Kai Shu']",http://arxiv.org/pdf/2205.09229v3.pdf,2022-05-18,," Recent advances in large pre-trained language models (PLMs) lead toimpressive gains in natural language understanding (NLU) tasks withtask-specific fine-tuning. However, directly fine-tuning PLMs heavily relies onsufficient labeled training instances, which are usually hard to obtain.Prompt-based tuning on PLMs has shown to be powerful for various downstreamfew-shot tasks. Existing works studying prompt-based tuning for few-shot NLUtasks mainly focus on deriving proper label words with a verbalizer orgenerating prompt templates to elicit semantics from PLMs. In addition,conventional data augmentation strategies such as synonym substitution, thoughwidely adopted in low-resource scenarios, only bring marginal improvements forprompt-based few-shot learning. 
Thus, an important research question arises:how to design effective data augmentation methods for prompt-based few-shottuning? To this end, considering the label semantics are essential inprompt-based tuning, we propose a novel label-guided data augmentationframework PromptDA, which exploits the enriched label semantic information fordata augmentation. Extensive experiment results on few-shot text classificationtasks demonstrate the superior performance of the proposed framework byeffectively leveraging label semantics and data augmentation for naturallanguage understanding. Our code is available athttps://github.com/canyuchen/PromptDA.",,arXiv,"['cs.cl', 'cs.ai']",, what makes datatotext generation hard for pretrained language models,"['Moniba Keymanesh', 'Adrian Benton', 'Mark Dredze']",http://arxiv.org/pdf/2205.11505v1.pdf,2022-05-23,," Expressing natural language descriptions of structured facts or relations --data-to-text generation (D2T) -- increases the accessibility of structuredknowledge repositories. Previous work shows that pre-trained languagemodels(PLMs) perform remarkably well on this task after fine-tuning on asignificant amount of task-specific training data. On the other hand, whileauto-regressive PLMs can generalize from a few task examples, their efficacy atD2T is largely unexplored. Furthermore, we have an incomplete understanding ofthe limits of PLMs on D2T. In this work, we conduct an empirical study of both fine-tuned andauto-regressive PLMs on the DART multi-domain D2T dataset. We consider theirperformance as a function of the amount of task-specific data and how thesedata are incorporated into the models: zero and few-shot learning, andfine-tuning of model weights. In addition, we probe the limits of PLMs bymeasuring performance on subsets of the evaluation data: novel predicates andabstractive test examples. To improve the performance on these subsets, weinvestigate two techniques: providing predicate descriptions in the context andre-ranking generated candidates by information reflected in the source.Finally, we conduct a human evaluation of model errors and show that D2Tgeneration tasks would benefit from datasets with more careful manual curation.",,arXiv,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",, attempt parameterefficient multitask tuning via attentional mixtures of soft prompts,"['Akari Asai', 'Mohammadreza Salehi', 'Matthew E. Peters', 'Hannaneh Hajishirzi']",http://arxiv.org/pdf/2205.11961v2.pdf,2022-05-24,," This work introduces a new multi-task, parameter-efficient language model(LM) tuning method that learns to transfer knowledge across different tasks viaa mixture of soft prompts-small prefix embedding vectors pre-trained fordifferent tasks. Our method, called ATTEMPT (ATTEntional Mixtures of PromptTuning), obtains source prompts as encodings of large-scale source tasks into asmall number of parameters and trains an attention module to interpolate thesource prompts and a newly initialized target prompt for every instance in thetarget task. During training, only the target task prompt and the attentionweights, which are shared between tasks in multi-task training, are updated,while the original LM and source prompts are intact. ATTEMPT is highlyparameter-efficient (e.g., updates 2,300 times fewer parameters than fullfine-tuning) while achieving high task performance using knowledge fromhigh-resource tasks. 
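The core interpolation step of the approach described above can be pictured as attention over a bank of frozen source prompts plus a new target prompt, weighted per instance. A toy PyTorch sketch of that idea with made-up shapes (an illustration of the mechanism, not the ATTEMPT code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptMixer(nn.Module):
    """Interpolate frozen source prompts and a target prompt with instance-wise attention."""
    def __init__(self, source_prompts, prompt_len=10, dim=32):
        super().__init__()
        self.sources = nn.Parameter(source_prompts, requires_grad=False)  # (S, L, D) frozen
        self.target = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)   # trainable
        self.attn_proj = nn.Linear(dim, dim)                              # trainable scorer

    def forward(self, instance_repr):                      # (batch, D) pooled input encoding
        candidates = torch.cat([self.sources, self.target.unsqueeze(0)])  # (S+1, L, D)
        keys = candidates.mean(dim=1)                       # (S+1, D): one key per prompt
        scores = self.attn_proj(instance_repr) @ keys.T     # (batch, S+1)
        weights = F.softmax(scores, dim=-1)
        # Weighted sum of whole prompts -> one mixed prompt per instance: (batch, L, D)
        return torch.einsum("bs,sld->bld", weights, candidates)

source_prompts = torch.randn(4, 10, 32)       # 4 hypothetical pre-trained source prompts
mixer = PromptMixer(source_prompts)
mixed = mixer(torch.randn(8, 32))             # prepend `mixed` to the input embeddings
print(mixed.shape)                            # torch.Size([8, 10, 32])
```

Only the target prompt and the attention scorer carry gradients, which is what keeps the method parameter-efficient.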
Moreover, it is modular using pre-trained soft prompts,and can flexibly add or remove source prompts for effective knowledge transfer.Our experimental results across 21 diverse NLP datasets show that ATTEMPTsignificantly outperforms prompt tuning and outperforms or matches fullyfine-tuned or other parameter-efficient tuning approaches that use over tentimes more parameters. Finally, ATTEMPT outperforms previous work in few-shotlearning settings.",,arXiv,['cs.cl'],, making large language models better reasoners with stepaware verifier,"['Yifei Li', 'Zeqi Lin', 'Shizhuo Zhang', 'Qiang Fu', 'Bei Chen', 'Jian-Guang Lou', 'Weizhu Chen']",http://arxiv.org/pdf/2206.02336v3.pdf,2022-06-06,," Few-shot learning is a challenging task that requires language models togeneralize from limited examples. Large language models like GPT-3 and PaLMhave made impressive progress in this area, but they still face difficulties inreasoning tasks such as GSM8K, a benchmark for arithmetic problems. To improvetheir reasoning skills, previous work has proposed to guide the language modelwith prompts that elicit a series of reasoning steps before giving the finalanswer, achieving a significant improvement on GSM8K from 17.9% to 58.1% inproblem-solving rate. In this paper, we present DIVERSE (Diverse Verifier onReasoning Step), a novel approach that further enhances the reasoningcapability of language models. DIVERSE has three main components: first, itgenerates diverse prompts to explore different reasoning paths for the samequestion; second, it uses a verifier to filter out incorrect answers based on aweighted voting scheme; and third, it verifies each reasoning step individuallyinstead of the whole chain. We evaluate DIVERSE on the latest language modelcode-davinci-002 and show that it achieves new state-of-the-art results on sixof eight reasoning benchmarks (e.g., GSM8K 74.4% to 83.2%).",,arXiv,"['cs.cl', 'cs.ai']",, language models are generalpurpose interfaces,"['Yaru Hao', 'Haoyu Song', 'Li Dong', 'Shaohan Huang', 'Zewen Chi', 'Wenhui Wang', 'Shuming Ma', 'Furu Wei']",http://arxiv.org/pdf/2206.06336v1.pdf,2022-06-13,," Foundation models have received much attention due to their effectivenessacross a broad range of downstream applications. Though there is a bigconvergence in terms of architecture, most pretrained models are typicallystill developed for specific tasks or modalities. In this work, we propose touse language models as a general-purpose interface to various foundationmodels. A collection of pretrained encoders perceive diverse modalities (suchas vision, and language), and they dock with a language model that plays therole of a universal task layer. We propose a semi-causal language modelingobjective to jointly pretrain the interface and the modular encoders. Wesubsume the advantages and capabilities from both causal and non-causalmodeling, thereby combining the best of two worlds. Specifically, the proposedmethod not only inherits the capabilities of in-context learning and open-endedgeneration from causal language modeling, but also is conducive to finetuningbecause of the bidirectional encoders. 
More importantly, our approach seamlessly unlocks the combinations of the above capabilities, e.g., enabling in-context learning or instruction following with finetuned encoders. Experimental results across various language-only and vision-language benchmarks show that our model outperforms or is competitive with specialized models on finetuning, zero-shot generalization, and few-shot learning.",,arXiv,['cs.cl'],, fit parameter efficient fewshot transfer learning for personalized and federated image classification,"['Aliaksandra Shysheya', 'John Bronskill', 'Massimiliano Patacchiola', 'Sebastian Nowozin', 'Richard E Turner']",http://arxiv.org/pdf/2206.08671v2.pdf,2022-06-17,," Modern deep learning systems are increasingly deployed in situations such as personalization and federated learning where it is necessary to support i) learning on small amounts of data, and ii) communication-efficient distributed training protocols. In this work, we develop FiLM Transfer (FiT), which fulfills these requirements in the image classification setting by combining ideas from transfer learning (fixed pretrained backbones and fine-tuned FiLM adapter layers) and meta-learning (automatically configured Naive Bayes classifiers and episodic training) to yield parameter-efficient models with superior classification accuracy at low-shot. The resulting parameter efficiency is key for enabling few-shot learning, inexpensive model updates for personalization, and communication-efficient federated learning. We experiment with FiT on a wide range of downstream datasets and show that it achieves better classification accuracy than the leading Big Transfer (BiT) algorithm at low-shot and achieves state-of-the-art accuracy on the challenging VTAB-1k benchmark, with fewer than 1% of the updateable parameters. Finally, we demonstrate the parameter efficiency and superior accuracy of FiT in distributed low-shot applications including model personalization and federated learning, where model update size is an important performance metric.",,arXiv,"['stat.ml', 'cs.cv', 'cs.lg']",, a reinforcement learningbased offensive semantics censorship system for chatbots,"['Shaokang Cai', 'Dezhi Han', 'Zibin Zheng', 'Dun Li', 'Noel Crespi']",http://arxiv.org/pdf/2207.10569v1.pdf,2022-07-13,," The rapid development of artificial intelligence (AI) technology has enabled large-scale AI applications to land in the market and practice. However, while AI technology has brought many conveniences to people in the productization process, it has also exposed many security issues. In particular, attacks against the online learning vulnerabilities of chatbots occur frequently. Therefore, this paper proposes a semantics censorship chatbot system based on reinforcement learning, which is mainly composed of two parts: the offensive semantics censorship model and the semantics purification model. The offensive semantics censorship model combines the context of user input sentences to detect rapidly evolving offensive semantics and to respond to offensive responses. The semantics purification model handles the case in which the chatbot model has already been contaminated by large amounts of offensive semantics, correcting the offensive replies learned by the learning algorithm rather than rolling back to earlier versions. In addition, by integrating a once-through learning approach, the speed of semantics purification is accelerated while reducing the impact on the quality of replies.
The experimental results show that ourproposed approach reduces the probability of the chat model generatingoffensive replies and that the integration of the few-shot learning algorithmimproves the training speed rapidly while effectively slowing down the declinein BLEU values.",,arXiv,['cs.cl'],, alexatm 20b fewshot learning using a largescale multilingual seq2seq model,"['Saleh Soltan', 'Shankar Ananthakrishnan', 'Jack FitzGerald', 'Rahul Gupta', 'Wael Hamza', 'Haidar Khan', 'Charith Peris', 'Stephen Rawls', 'Andy Rosenbaum', 'Anna Rumshisky', 'Chandana Satya Prakash', 'Mukund Sridhar', 'Fabian Triefenbach', 'Apurv Verma', 'Gokhan Tur', 'Prem Natarajan']",http://arxiv.org/pdf/2208.01448v2.pdf,2022-08-02,," In this work, we demonstrate that multilingual large-scalesequence-to-sequence (seq2seq) models, pre-trained on a mixture of denoisingand Causal Language Modeling (CLM) tasks, are more efficient few-shot learnersthan decoder-only models on various tasks. In particular, we train a 20 billionparameter multilingual seq2seq model called Alexa Teacher Model (AlexaTM 20B)and show that it achieves state-of-the-art (SOTA) performance on 1-shotsummarization tasks, outperforming a much larger 540B PaLM decoder model.AlexaTM 20B also achieves SOTA in 1-shot machine translation, especially forlow-resource languages, across almost all language pairs supported by the model(Arabic, English, French, German, Hindi, Italian, Japanese, Marathi,Portuguese, Spanish, Tamil, and Telugu) on Flores-101 dataset. We also show inzero-shot setting, AlexaTM 20B outperforms GPT3 (175B) on SuperGLUE and SQuADv2datasets and provides SOTA performance on multilingual tasks such as XNLI,XCOPA, Paws-X, and XWinograd. Overall, our results present a compelling casefor seq2seq models as a powerful alternative to decoder-only models forLarge-scale Language Model (LLM) training.",,arXiv,"['cs.cl', 'cs.lg']",, unsupervisedly prompting alphafold2 for fewshot learning of accurate folding landscape and protein structure prediction,"['Jun Zhang', 'Sirui Liu', 'Mengyun Chen', 'Haotian Chu', 'Min Wang', 'Zidong Wang', 'Jialiang Yu', 'Ningxi Ni', 'Fan Yu', 'Diqing Chen', 'Yi Isaac Yang', 'Boxin Xue', 'Lijiang Yang', 'Yuan Liu', 'Yi Qin Gao']",http://arxiv.org/pdf/2208.09652v2.pdf,2022-08-20,," Data-driven predictive methods which can efficiently and accurately transformprotein sequences into biologically active structures are highly valuable forscientific research and medical development. Determining accurate foldinglandscape using co-evolutionary information is fundamental to the success ofmodern protein structure prediction methods. As the state of the art,AlphaFold2 has dramatically raised the accuracy without performing explicitco-evolutionary analysis. Nevertheless, its performance still shows strongdependence on available sequence homologs. Based on the interrogation on thecause of such dependence, we presented EvoGen, a meta generative model, toremedy the underperformance of AlphaFold2 for poor MSA targets. By promptingthe model with calibrated or virtually generated homologue sequences, EvoGenhelps AlphaFold2 fold accurately in low-data regime and even achieveencouraging performance with single-sequence predictions. Being able to makeaccurate predictions with few-shot MSA not only generalizes AlphaFold2 betterfor orphan sequences, but also democratizes its use for high-throughputapplications. 
Besides, EvoGen combined with AlphaFold2 yields a probabilisticstructure generation method which could explore alternative conformations ofprotein sequences, and the task-aware differentiable algorithm for sequencegeneration will benefit other related tasks including protein design.",,arXiv,"['cs.lg', 'cs.ai', 'physics.bio-ph']",, disentangle and remerge interventional knowledge distillation for fewshot object detection from a conditional causal perspective,"['Jiangmeng Li', 'Yanan Zhang', 'Wenwen Qiang', 'Lingyu Si', 'Chengbo Jiao', 'Xiaohui Hu', 'Changwen Zheng', 'Fuchun Sun']",http://arxiv.org/pdf/2208.12681v2.pdf,2022-08-26,," Few-shot learning models learn representations with limited humanannotations, and such a learning paradigm demonstrates practicability invarious tasks, e.g., image classification, object detection, etc. However,few-shot object detection methods suffer from an intrinsic defect that thelimited training data makes the model cannot sufficiently explore semanticinformation. To tackle this, we introduce knowledge distillation to thefew-shot object detection learning paradigm. We further run a motivatingexperiment, which demonstrates that in the process of knowledge distillation,the empirical error of the teacher model degenerates the prediction performanceof the few-shot object detection model as the student. To understand thereasons behind this phenomenon, we revisit the learning paradigm of knowledgedistillation on the few-shot object detection task from the causal theoreticstandpoint, and accordingly, develop a Structural Causal Model. Following thetheoretical guidance, we propose a backdoor adjustment-based knowledgedistillation method for the few-shot object detection task, namely Disentangleand Remerge (D&R), to perform conditional causal intervention toward thecorresponding Structural Causal Model. Empirically, the experiments onbenchmarks demonstrate that D&R can yield significant performance boosts infew-shot object detection. Code is available athttps://github.com/ZYN-1101/DandR.git.",,arXiv,['cs.cv'],, neurips'22 crossdomain metadl competition design and baseline results,"['Dustin Carrión-Ojeda', 'Hong Chen', 'Adrian El Baz', 'Sergio Escalera', 'Chaoyu Guan', 'Isabelle Guyon', 'Ihsan Ullah', 'Xin Wang', 'Wenwu Zhu']",http://arxiv.org/pdf/2208.14686v1.pdf,2022-08-31,," We present the design and baseline results for a new challenge in theChaLearn meta-learning series, accepted at NeurIPS'22, focusing on""cross-domain"" meta-learning. Meta-learning aims to leverage experience gainedfrom previous tasks to solve new tasks efficiently (i.e., with betterperformance, little training data, and/or modest computational resources).While previous challenges in the series focused on within-domain few-shotlearning problems, with the aim of learning efficiently N-way k-shot tasks(i.e., N class classification problems with k training examples), thiscompetition challenges the participants to solve ""any-way"" and ""any-shot""problems drawn from various domains (healthcare, ecology, biology,manufacturing, and others), chosen for their humanitarian and societal impact.To that end, we created Meta-Album, a meta-dataset of 40 image classificationdatasets from 10 domains, from which we carve out tasks with any number of""ways"" (within the range 2-20) and any number of ""shots"" (within the range1-20). The competition is with code submission, fully blind-tested on theCodaLab challenge platform. 
The code of the winners will be open-sourced,enabling the deployment of automated machine learning solutions for few-shotimage classification across several domains.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cv', 'cs.ne']",, automatic label sequence generation for prompting sequencetosequence models,"['Zichun Yu', 'Tianyu Gao', 'Zhengyan Zhang', 'Yankai Lin', 'Zhiyuan Liu', 'Maosong Sun', 'Jie Zhou']",http://arxiv.org/pdf/2209.09401v1.pdf,2022-09-20,," Prompting, which casts downstream applications as language modeling tasks,has shown to be sample efficient compared to standard fine-tuning withpre-trained models. However, one pitfall of prompting is the need ofmanually-designed patterns, whose outcome can be unintuitive and requires largevalidation sets to tune. To tackle the challenge, we propose AutoSeq, a fullyautomatic prompting method: (1) We adopt natural language prompts onsequence-to-sequence models, enabling free-form generation and larger labelsearch space; (2) We propose label sequences -- phrases with indefinite lengthsto verbalize the labels -- which eliminate the need of manual templates and aremore expressive than single label words; (3) We use beam search toautomatically generate a large amount of label sequence candidates and proposecontrastive re-ranking to get the best combinations. AutoSeq significantlyoutperforms other no-manual-design methods, such as soft prompt tuning, adaptertuning, and automatic search on single label words; the generated labelsequences are even better than curated manual ones on a variety of tasks. Ourmethod reveals the potential of sequence-to-sequence models in few-shotlearning and sheds light on a path to generic and automatic prompting. Thesource code of this paper can be obtained fromhttps://github.com/thunlp/Seq2Seq-Prompt.",,arXiv,"['cs.cl', 'cs.lg']",, collaboration of pretrained models makes better fewshot learner,"['Renrui Zhang', 'Bohao Li', 'Wei Zhang', 'Hao Dong', 'Hongsheng Li', 'Peng Gao', 'Yu Qiao']",http://arxiv.org/pdf/2209.12255v2.pdf,2022-09-25,," Few-shot classification requires deep neural networks to learn generalizedrepresentations only from limited training images, which is challenging butsignificant in low-data regimes. Recently, CLIP-based methods have shownpromising few-shot performance benefited from the contrastive language-imagepre-training. Based on this point, we question if the large-scale pre-trainingcan alleviate the few-shot data deficiency and also assist the representationlearning by the pre-learned knowledge. In this paper, we propose CoMo, aCollaboration of pre-trained Models that incorporates diverse prior knowledgefrom various pre-training paradigms for better few-shot learning. Our CoMoincludes: CLIP's language-contrastive knowledge, DINO's vision-contrastiveknowledge, and DALL-E's language-generative knowledge. Specifically, CoMo worksin two aspects: few-shot data expansion and diverse knowledge ensemble. Forone, we generate synthetic images via zero-shot DALL-E to enrich the few-shottraining data without any manpower. For the other, we introduce a learnableMulti-Knowledge Adapter (MK-Adapter) to adaptively blend the predictions fromCLIP and DINO. By such collaboration, CoMo can fully unleash the potential ofdifferent pre-training methods and unify them to perform state-of-the-art forfew-shot classification. 
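The adaptive-ensemble step described above can be sketched as a small learnable gate that blends class logits coming from two frozen pre-trained models. A toy PyTorch illustration of that blending (not the MK-Adapter implementation):

```python
import torch
import torch.nn as nn

class LogitBlender(nn.Module):
    """Blend per-class predictions from two frozen models with a learned gate."""
    def __init__(self, num_classes):
        super().__init__()
        self.gate = nn.Parameter(torch.zeros(num_classes))  # per-class mixing weight

    def forward(self, logits_a, logits_b):
        alpha = torch.sigmoid(self.gate)           # in (0, 1), broadcast over the batch
        return alpha * logits_a + (1 - alpha) * logits_b

# Hypothetical cached logits from two frozen backbones
# (e.g. a language-contrastive and a vision-contrastive model).
logits_a = torch.randn(16, 11)
logits_b = torch.randn(16, 11)
blender = LogitBlender(num_classes=11)
print(blender(logits_a, logits_b).shape)           # torch.Size([16, 11])
```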
We conduct extensive experiments on 11 datasets to demonstrate the superiority and generalization ability of our approach.",,arXiv,['cs.cv'],, clip2point transfer clip to point cloud classification with imagedepth pretraining,"['Tianyu Huang', 'Bowen Dong', 'Yunhan Yang', 'Xiaoshui Huang', 'Rynson W. H. Lau', 'Wanli Ouyang', 'Wangmeng Zuo']",http://arxiv.org/pdf/2210.01055v3.pdf,2022-10-03,," Pre-training across 3D vision and language remains under development because of limited training data. Recent works attempt to transfer vision-language pre-training models to 3D vision. PointCLIP converts point cloud data to multi-view depth maps, adopting CLIP for shape classification. However, its performance is restricted by the domain gap between rendered depth maps and images, as well as the diversity of depth distributions. To address this issue, we propose CLIP2Point, an image-depth pre-training method by contrastive learning to transfer CLIP to the 3D domain, and adapt it to point cloud classification. We introduce a new depth rendering setting that forms a better visual effect, and then render 52,460 pairs of images and depth maps from ShapeNet for pre-training. The pre-training scheme of CLIP2Point combines cross-modality learning to enforce the depth features for capturing expressive visual and textual features and intra-modality learning to enhance the invariance of depth aggregation. Additionally, we propose a novel Dual-Path Adapter (DPA) module, i.e., a dual-path structure with simplified adapters for few-shot learning. The dual-path structure allows the joint use of CLIP and CLIP2Point, and the simplified adapter can well fit few-shot tasks without post-search. Experimental results show that CLIP2Point is effective in transferring CLIP knowledge to 3D vision. Our CLIP2Point outperforms PointCLIP and other self-supervised 3D networks, achieving state-of-the-art results on zero-shot and few-shot classification.",,arXiv,['cs.cv'],, "rarr researching and revising what language models say, using language models","['Luyu Gao', 'Zhuyun Dai', 'Panupong Pasupat', 'Anthony Chen', 'Arun Tejasvi Chaganty', 'Yicheng Fan', 'Vincent Y. Zhao', 'Ni Lao', 'Hongrae Lee', 'Da-Cheng Juan', 'Kelvin Guu']",http://arxiv.org/pdf/2210.08726v3.pdf,2022-10-17,," Language models (LMs) now excel at many tasks such as few-shot learning, question answering, reasoning, and dialog. However, they sometimes generate unsupported or misleading content. A user cannot easily determine whether their outputs are trustworthy or not, because most LMs do not have any built-in mechanism for attribution to external evidence. To enable attribution while still preserving all the powerful advantages of recent generation models, we propose RARR (Retrofit Attribution using Research and Revision), a system that 1) automatically finds attribution for the output of any text generation model and 2) post-edits the output to fix unsupported content while preserving the original output as much as possible. 
When applied to the output of several state-of-the-art LMs on a diverse set of generation tasks, we find that RARR significantly improves attribution while otherwise preserving the original input to a much greater degree than previously explored edit models. Furthermore, the implementation of RARR requires only a handful of training examples, a large language model, and standard web search.",,arXiv,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",, tape assessing fewshot russian language understanding,"['Ekaterina Taktasheva', 'Tatiana Shavrina', 'Alena Fenogenova', 'Denis Shevelev', 'Nadezhda Katricheva', 'Maria Tikhonova', 'Albina Akhmetgareeva', 'Oleg Zinkevich', 'Anastasiia Bashmakova', 'Svetlana Iordanskaia', 'Alena Spiridonova', 'Valentina Kurenshchikova', 'Ekaterina Artemova', 'Vladislav Mikhailov']",http://arxiv.org/pdf/2210.12813v1.pdf,2022-10-23,," Recent advances in zero-shot and few-shot learning have shown promise for a scope of research and practical purposes. However, this fast-growing area lacks standardized evaluation suites for non-English languages, hindering progress outside the Anglo-centric paradigm. To address this line of research, we propose TAPE (Text Attack and Perturbation Evaluation), a novel benchmark that includes six more complex NLU tasks for Russian, covering multi-hop reasoning, ethical concepts, logic and commonsense knowledge. The TAPE's design focuses on systematic zero-shot and few-shot NLU evaluation: (i) linguistic-oriented adversarial attacks and perturbations for analyzing robustness, and (ii) subpopulations for nuanced interpretation. The detailed analysis of testing the autoregressive baselines indicates that simple spelling-based perturbations affect the performance the most, while paraphrasing the input has a more negligible effect. At the same time, the results demonstrate a significant gap between the neural and human baselines for most tasks. We publicly release TAPE (tape-benchmark.com) to foster research on robust LMs that can generalize to new tasks when little to no supervision is available.",,arXiv,['cs.cl'],, learning new tasks from a few examples with softlabel prototypes,"['Avyav Kumar Singh', 'Ekaterina Shutova', 'Helen Yannakoudakis']",http://arxiv.org/pdf/2210.17437v2.pdf,2022-10-31,," It has been experimentally demonstrated that humans are able to learn in a manner that allows them to make predictions on categories for which they have not seen any examples (Malaviya et al., 2022). Sucholutsky and Schonlau (2020) have recently presented a machine learning approach that aims to do the same. They utilise synthetically generated data and demonstrate that it is possible to achieve sub-linear scaling and develop models that can learn to recognise N classes from M training samples where M is less than N - aka less-than-one shot learning. Their method was, however, defined for univariate or simple multivariate data (Sucholutsky et al., 2021). We extend it to work on large, high-dimensional and real-world datasets and empirically validate it in this new and challenging setting. We apply this method to learn previously unseen NLP tasks from very few examples (4, 8 or 16). We first generate compact, sophisticated less-than-one shot representations called soft-label prototypes which are fitted on training data, capturing the distribution of different classes across the input domain space. 
We then use a modified k-NearestNeighbours classifier to demonstrate that soft-label prototypes can classifydata competitively, even outperforming much more computationally complexfew-shot learning methods.",,arXiv,"['cs.lg', 'cs.cl']",, explicit knowledge transfer for weaklysupervised code generation,"['Zhangir Azerbayev', 'Ansong Ni', 'Hailey Schoelkopf', 'Dragomir Radev']",http://arxiv.org/pdf/2211.16740v3.pdf,2022-11-30,," Large language models (LLMs) can acquire strong code-generation capabilitiesthrough few-shot learning. In contrast, supervised fine-tuning is still neededfor smaller models to achieve good performance. Such fine-tuning demands alarge number of task-specific NL-code pairs, which are expensive to obtain. Inthis paper, we attempt to transfer the code generation ability of an LLM to asmaller model with the aid of weakly-supervised data. More specifically, wepropose explicit knowledge transfer (EKT), which uses the few-shot capabilitiesof a teacher LLM to create NL-code pairs that we then filter for correctnessand fine-tune the student on. We evaluate EKT on the task of generating codesolutions to math word problems from the GSM8k dataset. We find that EKT notonly yields better performance than training with expert iteration, but alsooutperforms knowledge distillation, another form of knowledge transfer. AGPT-Neo 1.3B model trained using EKT with a GPT-J teacher achieves a 12.4%pass@100 on GSM8k, while the same student and teacher trained with knowledgedistillation yield only a 3.7% pass@100. We also show that it is possible for astudent model to outperform the teacher using EKT.",,arXiv,['cs.cl'],, can incontext learners learn a reasoning concept from demonstrations,"['Michal Štefánik', 'Marek Kadlčík']",http://arxiv.org/pdf/2212.01692v4.pdf,2022-12-03,," Language models exhibit an emergent ability to learn a new task from a smallnumber of input-output demonstrations. However, recent work shows thatin-context learners largely rely on their pre-trained knowledge, such as thesentiment of the labels, instead of learning new associations from the input.We argue that the commonly-used few-shot evaluation using a random selection ofin-context demonstrations can not disentangle models' reliance on such biases,as most of the randomly-selected demonstrations do not present relationsinformative for prediction beyond exposing the task's input-outputdistribution. Therefore, to evaluate models' in-context learning ability independent ofmodels' memory, we introduce a Concept-sharing few-shot learning methodchoosing the demonstrations that share an underlying concept with the predictedsample. We extract a set of such concepts from available human explanations andmeasure how much models can benefit from presenting these concepts in few-shotdemonstrations. We find that most of the recent in-context learners can not consistentlybenefit from the demonstrated concepts, irrespective of the model size.However, we note that T0 models are more sensitive to exhibited concepts,benefiting from concept-sharing demonstrations in 7 out of 8 evaluationscenarios.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, federated fewshot learning for mobile nlp,"['Dongqi Cai', 'Shangguang Wang', 'Yaozong Wu', 'Felix Xiaozhu Lin', 'Mengwei Xu']",http://arxiv.org/pdf/2212.05974v2.pdf,2022-12-12,," Natural language processing (NLP) sees rich mobile applications. To supportvarious language understanding tasks, a foundation NLP model is oftenfine-tuned in a federated, privacy-preserving setting (FL). 
This processcurrently relies on at least hundreds of thousands of labeled training samplesfrom mobile clients; yet mobile users often lack willingness or knowledge tolabel their data. Such an inadequacy of data labels is known as a few-shotscenario; it becomes the key blocker for mobile NLP applications. For the first time, this work investigates federated NLP in the few-shotscenario (FedFSL). By retrofitting algorithmic advances of pseudo labeling andprompt learning, we first establish a training pipeline that deliverscompetitive accuracy when only 0.05% (fewer than 100) of the training data islabeled and the remaining is unlabeled. To instantiate the workflow, we furtherpresent a system FeS, addressing the high execution cost with novel designs.(1) Curriculum pacing, which injects pseudo labels to the training workflow ata rate commensurate to the learning progress; (2) Representational diversity, amechanism for selecting the most learnable data, only for which pseudo labelswill be generated; (3) Co-planning of a model's training depth and layercapacity. Together, these designs reduce the training delay, client energy, andnetwork traffic by up to 46.0$\times$, 41.2$\times$ and 3000.0$\times$,respectively. Through algorithm/system co-design, FFNLP demonstrates that FLcan apply to challenging settings where most training samples are unlabeled.",,arXiv,"['cs.lg', 'cs.cl']",, fewfedweight fewshot federated learning framework across multiple nlp tasks,"['Weilong Dong', 'Xinwei Wu', 'Junzhuo Li', 'Shuangzhi Wu', 'Chao Bian', 'Deyi Xiong']",http://arxiv.org/pdf/2212.08354v1.pdf,2022-12-16,," Massively multi-task learning with large language models has recently madesubstantial progress on few-shot generalization. However, this is usuallyperformed in a centralized learning fashion, ignoring the privacy sensitivityissue of (annotated) data used in multiple tasks. To mitigate this issue, wepropose FewFedWeight, a few-shot federated learning framework across multipletasks, to achieve the best of both worlds: privacy preservation and cross-taskgeneralization. FewFedWeight trains client models in isolated devices withoutsharing data. It broadcasts the global model in the server to each client andproduces pseudo data for clients so that knowledge from the global model can beexplored to enhance few-shot learning of each client model. An energy-basedalgorithm is further proposed to weight pseudo samples in order to reduce thenegative impact of noise from the generated pseudo data. Adaptive model weightsof client models are also tuned according to their performance. We use thesemodel weights to dynamically aggregate client models to update the globalmodel. Experiments on 118 NLP tasks show that FewFedWeight can significantlyimprove the performance of client models on 61% tasks with an averageperformance improvement rate of 30.5% over the baseline and substantiallyoutperform FedAvg and other decentralized learning methods.",,arXiv,['cs.cl'],, contrastive distillation is a sampleefficient selfsupervised loss policy for transfer learning,"['Chris Lengerich', 'Gabriel Synnaeve', 'Amy Zhang', 'Hugh Leather', 'Kurt Shuster', 'François Charton', 'Charysse Redwood']",http://arxiv.org/pdf/2212.11353v1.pdf,2022-12-21,," Traditional approaches to RL have focused on learning decision policiesdirectly from episodic decisions, while slowly and implicitly learning thesemantics of compositional representations needed for generalization. 
Whilesome approaches have been adopted to refine representations via auxiliaryself-supervised losses while simultaneously learning decision policies,learning compositional representations from hand-designed andcontext-independent self-supervised losses (multi-view) still adapts relativelyslowly to the real world, which contains many non-IID subspaces requiring rapiddistribution shift in both time and spatial attention patterns at varyinglevels of abstraction. In contrast, supervised language model cascades haveshown the flexibility to adapt to many diverse manifolds, and hints ofself-learning needed for autonomous task transfer. However, to date, transfermethods for language models like few-shot learning and fine-tuning stillrequire human supervision and transfer learning using self-learning methods hasbeen underexplored. We propose a self-supervised loss policy called contrastivedistillation which manifests latent variables with high mutual information withboth source and target tasks from weights to tokens. We show how thisoutperforms common methods of transfer learning and suggests a useful designaxis of trading off compute for generalizability for online transfer.Contrastive distillation is improved through sampling from memory and suggestsa simple algorithm for more efficiently sampling negative examples forcontrastive losses than random sampling.",,arXiv,"['cs.cl', 'cs.lg']",, exploring efficient fewshot adaptation for vision transformers,"['Chengming Xu', 'Siqian Yang', 'Yabiao Wang', 'Zhanxiong Wang', 'Yanwei Fu', 'Xiangyang Xue']",http://arxiv.org/pdf/2301.02419v1.pdf,2023-01-06,," The task of Few-shot Learning (FSL) aims to do the inference on novelcategories containing only few labeled examples, with the help of knowledgelearned from base categories containing abundant labeled training samples.While there are numerous works into FSL task, Vision Transformers (ViTs) haverarely been taken as the backbone to FSL with few trials focusing on naivefinetuning of whole backbone or classification layer.} Essentially, despiteViTs have been shown to enjoy comparable or even better performance on othervision tasks, it is still very nontrivial to efficiently finetune the ViTs inreal-world FSL scenarios. To this end, we propose a novel efficient TransformerTuning (eTT) method that facilitates finetuning ViTs in the FSL tasks. The keynovelties come from the newly presented Attentive Prefix Tuning (APT) andDomain Residual Adapter (DRA) for the task and backbone tuning, individually.Specifically, in APT, the prefix is projected to new key and value pairs thatare attached to each self-attention layer to provide the model withtask-specific information. Moreover, we design the DRA in the form of learnableoffset vectors to handle the potential domain gaps between base and novel data.To ensure the APT would not deviate from the initial task-specific informationmuch, we further propose a novel prototypical regularization, which maximizesthe similarity between the projected distribution of prefix and initialprototypes, regularizing the update procedure. Our method receives outstandingperformance on the challenging Meta-Dataset. We conduct extensive experimentsto show the efficacy of our model.",,arXiv,['cs.cv'],, unleashing the power of shared label structures for human activity recognition,"['Xiyuan Zhang', 'Ranak Roy Chowdhury', 'Jiayun Zhang', 'Dezhi Hong', 'Rajesh K. 
Gupta', 'Jingbo Shang']",http://arxiv.org/pdf/2301.03462v2.pdf,2023-01-01,," Current human activity recognition (HAR) techniques regard activity labels asinteger class IDs without explicitly modeling the semantics of class labels. Weobserve that different activity names often have shared structures. Forexample, ""open door"" and ""open fridge"" both have ""open"" as the action; ""kickingsoccer ball"" and ""playing tennis ball"" both have ""ball"" as the object. Suchshared structures in label names can be translated to the similarity in sensorydata and modeling common structures would help uncover knowledge acrossdifferent activities, especially for activities with limited samples. In thispaper, we propose SHARE, a HAR framework that takes into account sharedstructures of label names for different activities. To exploit the sharedstructures, SHARE comprises an encoder for extracting features from inputsensory time series and a decoder for generating label names as a tokensequence. We also propose three label augmentation techniques to help the modelmore effectively capture semantic structures across activities, including abasic token-level augmentation, and two enhanced embedding-level andsequence-level augmentations utilizing the capabilities of pre-trained models.SHARE outperforms state-of-the-art HAR models in extensive experiments on sevenHAR benchmark datasets. We also evaluate in few-shot learning and labelimbalance settings and observe even more significant performance gap.",,arXiv,"['cs.lg', 'cs.ai', 'eess.sp']",, "see, think, confirm interactive prompting between vision and language models for knowledgebased visual reasoning","['Zhenfang Chen', 'Qinhong Zhou', 'Yikang Shen', 'Yining Hong', 'Hao Zhang', 'Chuang Gan']",http://arxiv.org/pdf/2301.05226v1.pdf,2023-01-12,," Large pre-trained vision and language models have demonstrated remarkablecapacities for various tasks. However, solving the knowledge-based visualreasoning tasks remains challenging, which requires a model to comprehensivelyunderstand image content, connect the external world knowledge, and performstep-by-step reasoning to answer the questions correctly. To this end, wepropose a novel framework named Interactive Prompting Visual Reasoner (IPVR)for few-shot knowledge-based visual reasoning. IPVR contains three stages, see,think and confirm. The see stage scans the image and grounds the visual conceptcandidates with a visual perception model. The think stage adopts a pre-trainedlarge language model (LLM) to attend to the key concepts from candidatesadaptively. It then transforms them into text context for prompting with avisual captioning model and adopts the LLM to generate the answer. The confirmstage further uses the LLM to generate the supporting rationale to the answer,verify the generated rationale with a cross-modality classifier and ensure thatthe rationale can infer the predicted output consistently. We conductexperiments on a range of knowledge-based visual reasoning datasets. We foundour IPVR enjoys several benefits, 1). it achieves better performance than theprevious few-shot learning baselines; 2). it enjoys the total transparency andtrustworthiness of the whole reasoning process by providing rationales for eachreasoning step; 3). 
it is computation-efficient compared with other fine-tuningbaselines.",,arXiv,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.lg']",, large language models are latent variable models explaining and finding good demonstrations for incontext learning,"['Xinyi Wang', 'Wanrong Zhu', 'Michael Saxon', 'Mark Steyvers', 'William Yang Wang']",http://arxiv.org/pdf/2301.11916v4.pdf,2023-01-27,," In recent years, pre-trained large language models (LLMs) have demonstratedremarkable efficiency in achieving an inference-time few-shot learningcapability known as in-context learning. However, existing literature hashighlighted the sensitivity of this capability to the selection of few-shotdemonstrations. Current understandings of the underlying mechanisms by whichthis capability arises from regular language model pretraining objectivesremain disconnected from the real-world LLMs. This study aims to examine thein-context learning phenomenon through a Bayesian lens, viewing real-world LLMsas latent variable models. On this premise, we propose an algorithm to selectoptimal demonstrations from a set of annotated data with a small LM, and thendirectly generalize the selected demonstrations to larger LMs. We demonstratesignificant improvement over baselines, averaged over eight GPT models on eightreal-world text classification datasets. We also demonstrate the real-worldusefulness of our algorithm on GSM8K, a math word problem dataset. Ourempirical findings support our hypothesis that LLMs implicitly infer a latentvariable containing task information.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, language quantized autoencoders towards unsupervised textimage alignment,"['Hao Liu', 'Wilson Yan', 'Pieter Abbeel']",http://arxiv.org/pdf/2302.00902v2.pdf,2023-02-02,," Recent progress in scaling up large language models has shown impressivecapabilities in performing few-shot learning across a wide range of text-basedtasks. However, a key limitation is that these language models fundamentallylack visual perception - a crucial attribute needed to extend these models tobe able to interact with the real world and solve vision tasks, such as invisual-question answering and robotics. Prior works have largely connectedimage to text through pretraining and/or fine-tuning on curated image-textdatasets, which can be a costly and expensive process. In order to resolve thislimitation, we propose a simple yet effective approach calledLanguage-Quantized AutoEncoder (LQAE), a modification of VQ-VAE that learns toalign text-image data in an unsupervised manner by leveraging pretrainedlanguage models (e.g., BERT, RoBERTa). Our main idea is to encode image assequences of text tokens by directly quantizing image embeddings using apretrained language codebook. We then apply random masking followed by a BERTmodel, and have the decoder reconstruct the original image from BERT predictedtext token embeddings. By doing so, LQAE learns to represent similar imageswith similar clusters of text tokens, thereby aligning these two modalitieswithout the use of aligned text-image pairs. This enables few-shot imageclassification with large language models (e.g., GPT-3) as well as linearclassification of images based on BERT text features. 
To the best of ourknowledge, our work is the first work that uses unaligned images for multimodaltasks by leveraging the power of pretrained language models.",,arXiv,"['cs.lg', 'cs.cl', 'cs.cv']",, the unreasonable effectiveness of fewshot learning for machine translation,"['Xavier Garcia', 'Yamini Bansal', 'Colin Cherry', 'George Foster', 'Maxim Krikun', 'Fangxiaoyu Feng', 'Melvin Johnson', 'Orhan Firat']",http://arxiv.org/pdf/2302.01398v1.pdf,2023-02-02,," We demonstrate the potential of few-shot translation systems, trained withunpaired language data, for both high and low-resource language pairs. We showthat with only 5 examples of high-quality translation data shown at inference,a transformer decoder-only model trained solely with self-supervised learning,is able to match specialized supervised state-of-the-art models as well as moregeneral commercial translation systems. In particular, we outperform the bestperforming system on the WMT'21 English - Chinese news translation task by onlyusing five examples of English - Chinese parallel data at inference. Moreover,our approach in building these models does not necessitate joint multilingualtraining or back-translation, is conceptually simple and shows the potential toextend to the multilingual setting. Furthermore, the resulting models are twoorders of magnitude smaller than state-of-the-art language models. We thenanalyze the factors which impact the performance of few-shot translationsystems, and highlight that the quality of the few-shot demonstrations heavilydetermines the quality of the translations generated by our models. Finally, weshow that the few-shot paradigm also provides a way to control certainattributes of the translation -- we show that we are able to control forregional varieties and formality using only a five examples at inference,paving the way towards controllable machine translation systems.",,arXiv,['cs.cl'],, crosscodebench benchmarking crosstask generalization of source code models,"['Changan Niu', 'Chuanyi Li', 'Vincent Ng', 'Bin Luo']",http://arxiv.org/pdf/2302.04030v2.pdf,2023-02-08,," Despite the recent advances showing that a model pre-trained on large-scalesource code data is able to gain appreciable generalization capability, itstill requires a sizeable amount of data on the target task for fine-tuning.And the effectiveness of the model generalization is largely affected by thesize and quality of the fine-tuning data, which is detrimental for target taskswith limited or unavailable resources. Therefore, cross-task generalization,with the goal of improving the generalization of the model to unseen tasks thathave not been seen before, is of strong research and application value. In this paper, we propose a large-scale benchmark that includes 216 existingcode-related tasks. Then, we annotate each task with the corresponding metainformation such as task description and instruction, which contains detailedinformation about the task and a solution guide. This also helps us to easilycreate a wide variety of ``training/evaluation'' task splits to evaluate thevarious cross-task generalization capabilities of the model. Then we performsome preliminary experiments to demonstrate that the cross-task generalizationof models can be largely improved by in-context learning methods such asfew-shot learning and learning from task instructions, which shows thepromising prospects of conducting cross-task learning research on ourbenchmark. 
We hope that the collection of the datasets and our benchmark willfacilitate future work that is not limited to cross-task generalization.",,arXiv,"['cs.se', 'cs.ai']",, revilm retrievalaugmented visual language model for zero and fewshot image captioning,"['Zhuolin Yang', 'Wei Ping', 'Zihan Liu', 'Vijay Korthikanti', 'Weili Nie', 'De-An Huang', 'Linxi Fan', 'Zhiding Yu', 'Shiyi Lan', 'Bo Li', 'Ming-Yu Liu', 'Yuke Zhu', 'Mohammad Shoeybi', 'Bryan Catanzaro', 'Chaowei Xiao', 'Anima Anandkumar']",http://arxiv.org/pdf/2302.04858v2.pdf,2023-02-09,," Augmenting pretrained language models (LMs) with a vision encoder (e.g.,Flamingo) has obtained the state-of-the-art results in image-to-textgeneration. However, these models store all the knowledge within theirparameters, thus often requiring enormous model parameters to model theabundant visual concepts and very rich textual descriptions. Additionally, theyare inefficient in incorporating new data, requiring a computational-expensivefine-tuning process. In this work, we introduce a Retrieval-augmented VisualLanguage Model, Re-ViLM, built upon the Flamingo, that supports retrieving therelevant knowledge from the external database for zero and in-context few-shotimage-to-text generations. By storing certain knowledge explicitly in theexternal database, our approach reduces the number of model parameters and caneasily accommodate new data during evaluation by simply updating the database.We also construct an interleaved image and text data that facilitatesin-context few-shot learning capabilities. We demonstrate that Re-ViLMsignificantly boosts performance for image-to-text generation tasks, especiallyfor zero-shot and few-shot generation in out-of-domain settings with 4 timesless parameters compared with baseline methods.",,arXiv,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.ir', 'cs.lg']",, maskguided bert for few shot text classification,"['Wenxiong Liao', 'Zhengliang Liu', 'Haixing Dai', 'Zihao Wu', 'Yiyang Zhang', 'Xiaoke Huang', 'Yuzhong Chen', 'Xi Jiang', 'Wei Liu', 'Dajiang Zhu', 'Tianming Liu', 'Sheng Li', 'Xiang Li', 'Hongmin Cai']",http://arxiv.org/pdf/2302.10447v3.pdf,2023-02-21,," Transformer-based language models have achieved significant success invarious domains. However, the data-intensive nature of the transformerarchitecture requires much labeled data, which is challenging in low-resourcescenarios (i.e., few-shot learning (FSL)). The main challenge of FSL is thedifficulty of training robust models on small amounts of samples, whichfrequently leads to overfitting. Here we present Mask-BERT, a simple andmodular framework to help BERT-based architectures tackle FSL. The proposedapproach fundamentally differs from existing FSL strategies such as prompttuning and meta-learning. The core idea is to selectively apply masks on textinputs and filter out irrelevant information, which guides the model to focuson discriminative tokens that influence prediction results. In addition, tomake the text representations from different categories more separable and thetext representations from the same category more compact, we introduce acontrastive learning loss function. 
Experimental results on public-domainbenchmark datasets demonstrate the effectiveness of Mask-BERT.",,arXiv,"['cs.cl', 'cs.ai']",, metalearning with adaptive weighted loss for imbalanced coldstart recommendation,"['Minchang Kim', 'Yongjin Yang', 'Jung Hyun Ryu', 'Taesup Kim']",http://arxiv.org/pdf/2302.14640v2.pdf,2023-02-28,," Sequential recommenders have made great strides in capturing a user'spreferences. Nevertheless, the cold-start recommendation remains a fundamentalchallenge as they typically involve limited user-item interactions forpersonalization. Recently, gradient-based meta-learning approaches have emergedin the sequential recommendation field due to their fast adaptation andeasy-to-integrate abilities. The meta-learning algorithms formulate thecold-start recommendation as a few-shot learning problem, where each user isrepresented as a task to be adapted. While meta-learning algorithms generallyassume that task-wise samples are evenly distributed over classes or values,user-item interactions in real-world applications do not conform to such adistribution (e.g., watching favorite videos multiple times, leaving onlypositive ratings without any negative ones). Consequently, imbalanced userfeedback, which accounts for the majority of task training data, may dominatethe user adaptation process and prevent meta-learning algorithms from learningmeaningful meta-knowledge for personalized recommendations. To alleviate thislimitation, we propose a novel sequential recommendation framework based ongradient-based meta-learning that captures the imbalanced rating distributionof each user and computes adaptive loss for user-specific learning. Our work isthe first to tackle the impact of imbalanced ratings in cold-start sequentialrecommendation scenarios. Through extensive experiments conducted on real-worlddatasets, we demonstrate the effectiveness of our framework.",,arXiv,"['cs.ir', 'cs.lg']",, knowledgeaugmented fewshot visual relation detection,"['Tianyu Yu', 'Yangning Li', 'Jiaoyan Chen', 'Yinghui Li', 'Hai-Tao Zheng', 'Xi Chen', 'Qingbin Liu', 'Wenqiang Liu', 'Dongxiao Huang', 'Bei Wu', 'Yexin Wang']",http://arxiv.org/pdf/2303.05342v1.pdf,2023-03-09,," Visual Relation Detection (VRD) aims to detect relationships between objectsfor image understanding. Most existing VRD methods rely on thousands oftraining samples of each relationship to achieve satisfactory performance. Somerecent papers tackle this problem by few-shot learning with elaboratelydesigned pipelines and pre-trained word vectors. However, the performance ofexisting few-shot VRD models is severely hampered by the poor generalizationcapability, as they struggle to handle the vast semantic diversity of visualrelationships. Nonetheless, humans have the ability to learn new relationshipswith just few examples based on their knowledge. Inspired by this, we devise aknowledge-augmented, few-shot VRD framework leveraging both textual knowledgeand visual relation knowledge to improve the generalization ability of few-shotVRD. The textual knowledge and visual relation knowledge are acquired from apre-trained language model and an automatically constructed visual relationknowledge graph, respectively. We extensively validate the effectiveness of ourframework. 
Experiments conducted on three benchmarks from the commonly usedVisual Genome dataset show that our performance surpasses existingstate-of-the-art models with a large improvement.",,arXiv,"['cs.cv', 'cs.ai']",, hqp a humanannotated dataset for detecting online propaganda,"['Abdurahman Maarouf', 'Dominik Bär', 'Dominique Geissler', 'Stefan Feuerriegel']",http://arxiv.org/pdf/2304.14931v2.pdf,2023-04-28,," Online propaganda poses a severe threat to the integrity of societies.However, existing datasets for detecting online propaganda have a keylimitation: they were annotated using weak labels that can be noisy and evenincorrect. To address this limitation, our work makes the followingcontributions: (1) We present HQP: a novel dataset (N=30,000) for detectingonline propaganda with high-quality labels. To the best of our knowledge, HQPis the first dataset for detecting online propaganda that was created throughhuman annotation. (2) We show empirically that state-of-the-art language modelsfail in detecting online propaganda when trained with weak labels (AUC: 64.03).In contrast, state-of-the-art language models can accurately detect onlinepropaganda when trained with our high-quality labels (AUC: 92.25), which is animprovement of ~44%. (3) To address the cost of labeling, we extend our work tofew-shot learning. Specifically, we show that prompt-based learning using asmall sample of high-quality labels can still achieve a reasonable performance(AUC: 80.27). Finally, we discuss implications for the NLP community to balancethe cost and quality of labeling. Crucially, our work highlights the importanceof high-quality labels for sensitive NLP tasks such as propaganda detection.",,arXiv,['cs.cl'],, parameterefficient crosslingual transfer of vision and language models via translationbased alignment,"['Zhen Zhang', 'Jialu Wang', 'Xin Eric Wang']",http://arxiv.org/pdf/2305.03510v2.pdf,2023-05-02,," Pre-trained vision and language models such as CLIP have witnessed remarkablesuccess in connecting images and texts with a primary focus on English texts.Despite recent efforts to extend CLIP to support other languages, disparitiesin performance among different languages have been observed due to unevenresource availability. Additionally, current cross-lingual transfer methods ofthose pre-trained models would consume excessive resources for a large numberof languages. Therefore, we propose a new parameter-efficient cross-lingualtransfer learning framework that utilizes a translation-based alignment methodto mitigate multilingual disparities and explores parameter-efficientfine-tuning methods for parameter-efficient cross-lingual transfer. Extensiveexperiments on XTD and Multi30K datasets, covering 11 languages underzero-shot, few-shot, and full-dataset learning scenarios, show that ourframework significantly reduces the multilingual disparities among languagesand improves cross-lingual transfer results, especially in low-resourcescenarios, while only keeping and fine-tuning an extremely small number ofparameters compared to the full model (e.g., Our framework only requires 0.16\%additional parameters of a full-model for each language in the few-shotlearning scenario). The codes are available at\url{https://github.com/eric-ai-lab/PECTVLM}. 
",,arXiv,"['cs.cl', 'cs.ai']",, sentiment analysis in the era of large language models a reality check,"['Wenxuan Zhang', 'Yue Deng', 'Bing Liu', 'Sinno Jialin Pan', 'Lidong Bing']",http://arxiv.org/pdf/2305.15005v1.pdf,2023-05-24,," Sentiment analysis (SA) has been a long-standing research area in natural language processing. It can offer rich insights into human sentiments and opinions and has thus seen considerable interest from both academia and industry. With the advent of large language models (LLMs) such as ChatGPT, there is a great potential for their employment on SA problems. However, the extent to which existing LLMs can be leveraged for different sentiment analysis tasks remains unclear. This paper aims to provide a comprehensive investigation into the capabilities of LLMs in performing various sentiment analysis tasks, from conventional sentiment classification to aspect-based sentiment analysis and multifaceted analysis of subjective texts. We evaluate performance across 13 tasks on 26 datasets and compare the results against small language models (SLMs) trained on domain-specific datasets. Our study reveals that while LLMs demonstrate satisfactory performance in simpler tasks, they lag behind in more complex tasks requiring deeper understanding or structured sentiment information. However, LLMs significantly outperform SLMs in few-shot learning settings, suggesting their potential when annotation resources are limited. We also highlight the limitations of current evaluation practices in assessing LLMs' SA abilities and propose a novel benchmark, \textsc{SentiEval}, for a more comprehensive and realistic evaluation. Data and code during our investigations are available at \url{https://github.com/DAMO-NLP-SG/LLM-Sentiment}.",,arXiv,['cs.cl'],, impact of large language models on generating software specifications,"['Danning Xie', 'Byungwoo Yoo', 'Nan Jiang', 'Mijung Kim', 'Lin Tan', 'Xiangyu Zhang', 'Judy S. Lee']",http://arxiv.org/pdf/2306.03324v2.pdf,2023-06-06,," Software specifications are essential for ensuring the reliability of software systems. Existing specification extraction approaches, however, suffer from limited generalizability and require manual efforts. The recent emergence of Large Language Models (LLMs), which have been successfully applied to numerous software engineering tasks, offers a promising avenue for automating this process. In this paper, we conduct the first empirical study to evaluate the capabilities of LLMs for generating software specifications from software comments or documentation. We evaluate LLMs' performance with Few Shot Learning (FSL), enabling LLMs to generalize from a small number of examples, as well as different prompt construction strategies, and compare the performance of LLMs with traditional approaches. Additionally, we conduct a comparative diagnosis of the failure cases from both LLMs and traditional methods, identifying their unique strengths and weaknesses. Lastly, we conduct extensive experiments on 15 state of the art LLMs, evaluating their performance and cost effectiveness for generating software specifications. Our results show that with FSL, LLMs outperform traditional methods (by 5.6%), and more sophisticated prompt construction strategies can further enlarge this performance gap (up to 5.1 to 10.0%). Yet, LLMs suffer from their unique challenges, such as ineffective prompts and the lack of domain knowledge, which together account for 53 to 60% of LLM unique failures. 
Thestrong performance of open source models (e.g., StarCoder) makes closed sourcemodels (e.g., GPT 3 Davinci) less desirable due to size and cost. Our studyoffers valuable insights for future research to improve specificationgeneration.",,arXiv,['cs.se'],, prompting classes exploring the power of prompt class learning in weakly supervised semantic segmentation,"['Balamurali Murugesan', 'Rukhshanda Hussain', 'Rajarshi Bhattacharya', 'Ismail Ben Ayed', 'Jose Dolz']",http://arxiv.org/pdf/2307.00097v3.pdf,2023-06-30,," Recently, CLIP-based approaches have exhibited remarkable performance ongeneralization and few-shot learning tasks, fueled by the power of contrastivelanguage-vision pre-training. In particular, prompt tuning has emerged as aneffective strategy to adapt the pre-trained language-vision models todownstream tasks by employing task-related textual tokens. Motivated by thisprogress, in this work we question whether other fundamental problems, such asweakly supervised semantic segmentation (WSSS), can benefit from prompt tuning.Our findings reveal two interesting observations that shed light on the impactof prompt tuning on WSSS. First, modifying only the class token of the textprompt results in a greater impact on the Class Activation Map (CAM), comparedto arguably more complex strategies that optimize the context. And second, theclass token associated with the image ground truth does not necessarilycorrespond to the category that yields the best CAM. Motivated by theseobservations, we introduce a novel approach based on a PrOmpt cLass lEarning(POLE) strategy. Through extensive experiments we demonstrate that our simple,yet efficient approach achieves SOTA performance in a well-known WSSSbenchmark. These results highlight not only the benefits of language-visionmodels in WSSS but also the potential of prompt learning for this problem. Thecode is available at https://github.com/rB080/WSS_POLE.",,arXiv,['cs.cv'],, text descriptions are compressive and invariant representations for visual learning,"['Zhili Feng', 'Anna Bair', 'J. Zico Kolter']",http://arxiv.org/pdf/2307.04317v2.pdf,2023-07-10,," Modern image classification is based upon directly predicting classes vialarge discriminative networks, which do not directly contain information aboutthe intuitive visual features that may constitute a classification decision.Recently, work in vision-language models (VLM) such as CLIP has provided waysto specify natural language descriptions of image classes, but typicallyfocuses on providing single descriptions for each class. In this work, wedemonstrate that an alternative approach, in line with humans' understanding ofmultiple visual features per class, can also provide compelling performance inthe robust few-shot learning setting. In particular, we introduce a novelmethod, \textit{SLR-AVD (Sparse Logistic Regression using Augmented VisualDescriptors)}. This method first automatically generates multiple visualdescriptions of each class via a large language model (LLM), then uses a VLM totranslate these descriptions to a set of visual feature embeddings of eachimage, and finally uses sparse logistic regression to select a relevant subsetof these features to classify each image. Core to our approach is the factthat, information-theoretically, these descriptive features are more invariantto domain shift than traditional image embeddings, even though the VLM trainingprocess is not explicitly designed for invariant representation learning. 
Theseinvariant descriptive features also compose a better input compression scheme.When combined with finetuning, we show that SLR-AVD is able to outperformexisting state-of-the-art finetuning approaches on both in-distribution andout-of-distribution performance.",,arXiv,"['cs.cv', 'cs.lg']",, dialogstudio towards richest and most diverse unified dataset collection for conversational ai,"['Jianguo Zhang', 'Kun Qian', 'Zhiwei Liu', 'Shelby Heinecke', 'Rui Meng', 'Ye Liu', 'Zhou Yu', 'Huan Wang', 'Silvio Savarese', 'Caiming Xiong']",http://arxiv.org/pdf/2307.10172v3.pdf,2023-07-19,," Despite advancements in conversational AI, language models encounterchallenges to handle diverse conversational tasks, and existing dialoguedataset collections often lack diversity and comprehensiveness. To tackle theseissues, we introduce DialogStudio: the largest and most diverse collection ofdialogue datasets, unified under a consistent format while preserving theiroriginal information. Our collection encompasses data from open-domaindialogues, task-oriented dialogues, natural language understanding,conversational recommendation, dialogue summarization, and knowledge-groundeddialogues, making it an incredibly rich and diverse resource for dialogueresearch and model training. To further enhance the utility of DialogStudio, weidentify the licenses for each dataset, design external knowledge anddomain-aware prompts for selected dialogues to facilitate instruction-awarefine-tuning. Furthermore, we develop conversational AI models using the datasetcollection, and our experiments in both zero-shot and few-shot learningscenarios demonstrate the superiority of DialogStudio. To improve transparencyand support dataset and task-based research, as well as language modelpre-training, all datasets, licenses, codes, and models associated withDialogStudio are made publiclyaccessible\footnote{\url{https://github.com/salesforce/DialogStudio}}.",,arXiv,"['cs.cl', 'cs.ai']",, mutual reinforcement effects in japanese sentence classification and named entity recognition tasks,"['Chengguang Gan', 'Qinghao Zhang', 'Tatsunori Mori']",http://arxiv.org/pdf/2307.10291v2.pdf,2023-07-18,," Information extraction(IE) is a crucial subfield within natural languageprocessing. However, for the traditionally segmented approach to sentenceclassification and Named Entity Recognition, the intricate interactions betweenthese individual subtasks remain largely uninvestigated. In this study, wepropose an integrative analysis, converging sentence classification with NamedEntity Recognition, with the objective to unveil and comprehend the mutualreinforcement effect within these two information extraction subtasks. Toachieve this, we introduce a Sentence Classification and Named EntityRecognition Multi-task (SCNM) approach that combines Sentence Classification(SC) and Named Entity Recognition (NER). We develop a Sentence-to-LabelGeneration (SLG) framework for SCNM and construct a Wikipedia datasetcontaining both SC and NER. Using a format converter, we unify input formatsand employ a generative model to generate SC-labels, NER-labels, and associatedtext segments. We propose a Constraint Mechanism (CM) to improve generatedformat accuracy. Our results show SC accuracy increased by 1.13 points and NERby 1.06 points in SCNM compared to standalone tasks, with CM raising formataccuracy from 63.61 to 100. The findings indicate mutual reinforcement effectsbetween SC and NER, and integration enhances both tasks' performance. 
Weadditionally implemented the SLG framework on single SC task. It yieldedsuperior accuracies compared to the baseline on two distinct Japanese SCdatasets. Notably, in the experiment of few-shot learning, SLG framework showsmuch better performance than fine-tune method. These empirical findingscontribute additional evidence to affirm the efficacy of the SLG framework.",,arXiv,['cs.cl'],, chatgpt for arabic grammatical error correction,"['Sang Yun Kwon', 'Gagan Bhatia', 'El Moatez Billah Nagoud', 'Muhammad Abdul-Mageed']",http://arxiv.org/pdf/2308.04492v1.pdf,2023-08-08,," Recently, large language models (LLMs) fine-tuned to follow human instructionhave exhibited significant capabilities in various English NLP tasks. However,their performance in grammatical error correction (GEC) tasks, particularly innon-English languages, remains significantly unexplored. In this paper, wedelve into abilities of instruction fine-tuned LLMs in Arabic GEC, a task madecomplex due to Arabic's rich morphology. Our findings suggest that variousprompting methods, coupled with (in-context) few-shot learning, demonstrateconsiderable effectiveness, with GPT-4 achieving up to $65.49$F\textsubscript{1} score under expert prompting (approximately $5$ pointshigher than our established baseline). This highlights the potential of LLMs inlow-resource settings, offering a viable approach for generating usefulsynthetic data for model training. Despite these positive results, we find thatinstruction fine-tuned models, regardless of their size, significantlyunderperform compared to fully fine-tuned models of significantly smallersizes. This disparity highlights a substantial room for improvements for LLMs.Inspired by methods from low-resource machine translation, we also develop amethod exploiting synthetic data that significantly outperforms previous modelson two standard Arabic benchmarks. Our work sets new SoTA for Arabic GEC, with$72.19\%$ and $73.26$ F$_{1}$ on the 2014 and 2015 QALB datasets, respectively.",,arXiv,['cs.ai'],, llmebench a flexible framework for accelerating llms benchmarking,"['Fahim Dalvi', 'Maram Hasanain', 'Sabri Boughorbel', 'Basel Mousi', 'Samir Abdaljalil', 'Nizi Nazar', 'Ahmed Abdelali', 'Shammur Absar Chowdhury', 'Hamdy Mubarak', 'Ahmed Ali', 'Majd Hawasly', 'Nadir Durrani', 'Firoj Alam']",http://arxiv.org/pdf/2308.04945v1.pdf,2023-08-09,," The recent development and success of Large Language Models (LLMs)necessitate an evaluation of their performance across diverse NLP tasks indifferent languages. Although several frameworks have been developed and madepublicly available, their customization capabilities for specific tasks anddatasets are often complex for different users. In this study, we introduce theLLMeBench framework. Initially developed to evaluate Arabic NLP tasks usingOpenAI's GPT and BLOOM models; it can be seamlessly customized for any NLP taskand model, regardless of language. The framework also features zero- andfew-shot learning settings. A new custom dataset can be added in less than 10minutes, and users can use their own model API keys to evaluate the task athand. The developed framework has been already tested on 31 unique NLP tasksusing 53 publicly available datasets within 90 experimental setups, involvingapproximately 296K data points. We plan to open-source the framework for thecommunity (https://github.com/qcri/LLMeBench/). 
A video demonstrating theframework is available online (https://youtu.be/FkQn4UjYA0s).",,arXiv,"['cs.cl', 'cs.ai', '68t50', 'f.2.2; i.2.7']",, codecot and beyond learning to program and test like a developer,"['Dong Huang', 'Qingwen Bu', 'Heming Cui']",http://arxiv.org/pdf/2308.08784v1.pdf,2023-08-17,," In natural language processing, transformer-based large language models(LLMs) like GPT-x models developed by OpenAI have revolutionized the landscape.Despite their impressive capabilities, these models often encounter challengeswhen handling tasks that differ from their training data, resulting incompromised performance. To address this, few-shot learning has emerged as avaluable technique, allowing LLMs to adapt with minimal task-specific data. Oneinnovative strategy, known as Chain-of-Thought Prompting (CoT), has beenintroduced to guide LLMs in revealing cognitive processes during multi-stepreasoning. In this paper, we propose Code Chain-of-Thought~(CodeCoT), whichconsists of two components: the Vanilla CodeCoT and the Self-exam CodeCoT. Thelatter incorporates self-examination, empowering the model to iterativelygenerate code, formulate test cases, and refine its outputs. Specifically, theprocess entails the generation of test examples by the model corresponding tothe code it is tasked to implement. If it fails on the test examples, then itregenerates the code based on the erroneous code and associated error types.Through comprehensive experiments, we observed that both techniquessignificantly enhance code generation accuracy across various LLM variants. Ourevaluation results reveal that CodeCoT improves the code generationeffectiveness, including an unprecedented pass@1 accuracy of 79.27\% using theSelf-exam CodeCoT approach on the gpt-3.5-turbo-0613 model in the HumanEvaldataset.",,arXiv,"['cs.se', 'cs.ai']",, diagnosing infeasible optimization problems using large language models,"['Hao Chen', 'Gonzalo E. Constante-Flores', 'Can Li']",http://arxiv.org/pdf/2308.12923v1.pdf,2023-08-23,," Decision-making problems can be represented as mathematical optimizationmodels, finding wide applications in fields such as economics, engineering andmanufacturing, transportation, and health care. Optimization models aremathematical abstractions of the problem of making the best decision whilesatisfying a set of requirements or constraints. One of the primary barriers todeploying these models in practice is the challenge of helping practitionersunderstand and interpret such models, particularly when they are infeasible,meaning no decision satisfies all the constraints. Existing methods fordiagnosing infeasible optimization models often rely on expert systems,necessitating significant background knowledge in optimization. In this paper,we introduce OptiChat, a first-of-its-kind natural language-based systemequipped with a chatbot GUI for engaging in interactive conversations aboutinfeasible optimization models. OptiChat can provide natural languagedescriptions of the optimization model itself, identify potential sources ofinfeasibility, and offer suggestions to make the model feasible. Theimplementation of OptiChat is built on GPT-4, which interfaces with anoptimization solver to identify the minimal subset of constraints that renderthe entire optimization problem infeasible, also known as the IrreducibleInfeasible Subset (IIS). We utilize few-shot learning, expert chain-of-thought,key-retrieve, and sentiment prompts to enhance OptiChat's reliability. 
Ourexperiments demonstrate that OptiChat assists both expert and non-expert usersin improving their understanding of the optimization models, enabling them toquickly identify the sources of infeasibility.",,arXiv,"['cs.hc', 'cs.cl', 'cs.lg', 'math.oc']",, "longbench a bilingual, multitask benchmark for long context understanding","['Yushi Bai', 'Xin Lv', 'Jiajie Zhang', 'Hongchang Lyu', 'Jiankai Tang', 'Zhidian Huang', 'Zhengxiao Du', 'Xiao Liu', 'Aohan Zeng', 'Lei Hou', 'Yuxiao Dong', 'Jie Tang', 'Juanzi Li']",http://arxiv.org/pdf/2308.14508v1.pdf,2023-08-28,," Although large language models (LLMs) demonstrate impressive performance formany language tasks, most of them can only handle texts a few thousand tokenslong, limiting their applications on longer sequence inputs, such as books,reports, and codebases. Recent works have proposed methods to improve LLMs'long context capabilities by extending context windows and more sophisticatedmemory mechanisms. However, comprehensive benchmarks tailored for evaluatinglong context understanding are lacking. In this paper, we introduce LongBench,the first bilingual, multi-task benchmark for long context understanding,enabling a more rigorous evaluation of long context understanding. LongBenchcomprises 21 datasets across 6 task categories in both English and Chinese,with an average length of 6,711 words (English) and 13,386 characters(Chinese). These tasks cover key long-text application areas includingsingle-doc QA, multi-doc QA, summarization, few-shot learning, synthetic tasks,and code completion. All datasets in LongBench are standardized into a unifiedformat, allowing for effortless automatic evaluation of LLMs. Uponcomprehensive evaluation of 8 LLMs on LongBench, we find that: (1) Commercialmodel (GPT-3.5-Turbo-16k) outperforms other open-sourced models, but stillstruggles on longer contexts. (2) Scaled position embedding and fine-tuning onlonger sequences lead to substantial improvement on long context understanding.(3) Context compression technique such as retrieval brings improvement formodel with weak ability on long contexts, but the performance still lags behindmodels that have strong long context understanding capability. The code anddatasets are available at https://github.com/THUDM/LongBench.",,arXiv,['cs.cl'],, zeroshot learning with minimum instruction to extract social determinants and family history from clinical notes using gpt model,"['Neel Bhate', 'Ansh Mittal', 'Zhe He', 'Xiao Luo']",http://arxiv.org/pdf/2309.05475v2.pdf,2023-09-11,," Demographics, Social determinants of health, and family history documented inthe unstructured text within the electronic health records are increasinglybeing studied to understand how this information can be utilized with thestructured data to improve healthcare outcomes. After the GPT models werereleased, many studies have applied GPT models to extract this information fromthe narrative clinical notes. Different from the existing work, our researchfocuses on investigating the zero-shot learning on extracting this informationtogether by providing minimum information to the GPT model. We utilizede-identified real-world clinical notes annotated for demographics, varioussocial determinants, and family history information. Given that the GPT modelmight provide text different from the text in the original data, we explore twosets of evaluation metrics, including the traditional NER evaluation metricsand semantic similarity evaluation metrics, to completely understand theperformance. 
Our results show that the GPT-3.5 method achieved an average of 0.975 F1 on demographics extraction, 0.615 F1 on social determinants extraction, and 0.722 F1 on family history extraction. We believe these results can be further improved through model fine-tuning or few-shot learning. Through the case studies, we also identified the limitations of the GPT models, which need to be addressed in future research.",,arXiv,['cs.cl'],, using large language model to solve and explain physics word problems approaching human level,"['Jingzhe Ding', 'Yan Cen', 'Xinyuan Wei']",http://arxiv.org/pdf/2309.08182v2.pdf,2023-09-15,," Our work demonstrates that large language model (LLM) pre-trained on texts can not only solve pure math word problems, but also physics word problems, whose solution requires calculation and inference based on prior physical knowledge. We collect and annotate the first physics word problem dataset-PhysQA, which contains over 1000 junior high school physics word problems (covering Kinematics, Mass&Density, Mechanics, Heat, Electricity). Then we use OpenAI's GPT3.5 to generate the answer of these problems and found that GPT3.5 could automatically solve 49.3% of the problems through zero-shot learning and 73.2% through few-shot learning. This result demonstrates that by using similar problems and their answers as prompt, LLM could solve elementary physics word problems approaching human level performance. In addition to solving problems, GPT3.5 can also summarize the knowledge or topics covered by the problems, provide relevant explanations, and generate new physics word problems based on the input. Our work is the first research to focus on the automatic solving, explanation, and generation of physics word problems across various types and scenarios, and we achieve an acceptable and state-of-the-art accuracy. This underscores the potential of LLMs for further applications in secondary education.",,arXiv,"['cs.cl', 'cs.ai', 'i.2.7']",, nnsam plugandplay segment anything model improves nnunet performance,"['Yunxiang Li', 'Bowen Jing', 'Zihan Li', 'Jing Wang', 'You Zhang']",http://arxiv.org/pdf/2309.16967v2.pdf,2023-09-29,," The recent developments of foundation models in computer vision, especially the Segment Anything Model (SAM), allow scalable and domain-agnostic image segmentation to serve as a general-purpose segmentation tool. In parallel, the field of medical image segmentation has benefited significantly from specialized neural networks like the nnUNet, which is trained on domain-specific datasets and can automatically configure the network to tailor to specific segmentation challenges. To combine the advantages of foundation models and domain-specific models, we present nnSAM, which synergistically integrates the SAM model with the nnUNet model to achieve more accurate and robust medical image segmentation. The nnSAM model leverages the powerful and robust feature extraction capabilities of SAM, while harnessing the automatic configuration capabilities of nnUNet to promote dataset-tailored learning. Our comprehensive evaluation of nnSAM model on different sizes of training samples shows that it allows few-shot learning, which is highly relevant for medical image segmentation where high-quality, annotated data can be scarce and costly to obtain. By melding the strengths of both its predecessors, nnSAM positions itself as a potential new benchmark in medical image segmentation, offering a tool that combines broad applicability with specialized efficiency.
The code isavailable at https://github.com/Kent0n-Li/Medical-Image-Segmentation.",,arXiv,"['cs.cv', 'eess.iv']",, radit retrievalaugmented dual instruction tuning,"['Xi Victoria Lin', 'Xilun Chen', 'Mingda Chen', 'Weijia Shi', 'Maria Lomeli', 'Rich James', 'Pedro Rodriguez', 'Jacob Kahn', 'Gergely Szilvasy', 'Mike Lewis', 'Luke Zettlemoyer', 'Scott Yih']",http://arxiv.org/pdf/2310.01352v3.pdf,2023-10-02,," Retrieval-augmented language models (RALMs) improve performance by accessinglong-tail and up-to-date knowledge from external data stores, but arechallenging to build. Existing approaches require either expensiveretrieval-specific modifications to LM pre-training or use post-hoc integrationof the data store that leads to suboptimal performance. We introduceRetrieval-Augmented Dual Instruction Tuning (RA-DIT), a lightweight fine-tuningmethodology that provides a third option by retrofitting any LLM with retrievalcapabilities. Our approach operates in two distinct fine-tuning steps: (1) oneupdates a pre-trained LM to better use retrieved information, while (2) theother updates the retriever to return more relevant results, as preferred bythe LM. By fine-tuning over tasks that require both knowledge utilization andcontextual awareness, we demonstrate that each stage yields significantperformance improvements, and using both leads to additional gains. Our bestmodel, RA-DIT 65B, achieves state-of-the-art performance across a range ofknowledge-intensive zero- and few-shot learning benchmarks, significantlyoutperforming existing in-context RALM approaches by up to +8.9% in 0-shotsetting and +1.4% in 5-shot setting on average.",,arXiv,"['cs.cl', 'cs.ai']",, longllmlingua accelerating and enhancing llms in long context scenarios via prompt compression,"['Huiqiang Jiang', 'Qianhui Wu', 'Xufang Luo', 'Dongsheng Li', 'Chin-Yew Lin', 'Yuqing Yang', 'Lili Qiu']",http://arxiv.org/pdf/2310.06839v1.pdf,2023-10-10,," In long context scenarios, large language models (LLMs) face three mainchallenges: higher computational/financial cost, longer latency, and inferiorperformance. Some studies reveal that the performance of LLMs depends on boththe density and the position of the key information (question relevant) in theinput prompt. Inspired by these findings, we propose LongLLMLingua for promptcompression towards improving LLMs' perception of the key information tosimultaneously address the three challenges. We conduct evaluation on a widerange of long context scenarios including single-/multi-document QA, few-shotlearning, summarization, synthetic tasks, and code completion. The experimentalresults show that LongLLMLingua compressed prompt can derive higher performancewith much less cost. The latency of the end-to-end system is also reduced. Forexample, on NaturalQuestions benchmark, LongLLMLingua gains a performance boostof up to 17.1% over the original prompt with ~4x fewer tokens as input toGPT-3.5-Turbo. It can derive cost savings of \$28.5 and \$27.4 per 1,000samples from the LongBench and ZeroScrolls benchmark, respectively.Additionally, when compressing prompts of ~10k tokens at a compression rate of2x-10x, LongLLMLingua can speed up the end-to-end latency by 1.4x-3.8x. 
Ourcode is available at https://aka.ms/LLMLingua.",,arXiv,"['cs.cl', 'cs.lg']",, empower textattributed graphs learning with large language models (llms),"['Jianxiang Yu', 'Yuxiang Ren', 'Chenghua Gong', 'Jiaqi Tan', 'Xiang Li', 'Xuecang Zhang']",http://arxiv.org/pdf/2310.09872v1.pdf,2023-10-15,," Text-attributed graphs have recently garnered significant attention due totheir wide range of applications in web domains. Existing methodologies employword embedding models for acquiring text representations as node features,which are subsequently fed into Graph Neural Networks (GNNs) for training.Recently, the advent of Large Language Models (LLMs) has introduced theirpowerful capabilities in information retrieval and text generation, which cangreatly enhance the text attributes of graph data. Furthermore, the acquisitionand labeling of extensive datasets are both costly and time-consumingendeavors. Consequently, few-shot learning has emerged as a crucial problem inthe context of graph learning tasks. In order to tackle this challenge, wepropose a lightweight paradigm called ENG, which adopts a plug-and-playapproach to empower text-attributed graphs through node generation using LLMs.Specifically, we utilize LLMs to extract semantic information from the labelsand generate samples that belong to these categories as exemplars.Subsequently, we employ an edge predictor to capture the structural informationinherent in the raw dataset and integrate the newly generated samples into theoriginal graph. This approach harnesses LLMs for enhancing class-levelinformation and seamlessly introduces labeled nodes and edges without modifyingthe raw dataset, thereby facilitating the node classification task in few-shotscenarios. Extensive experiments demonstrate the outstanding performance of ourproposed paradigm, particularly in low-shot scenarios. For instance, in the1-shot setting of the ogbn-arxiv dataset, ENG achieves a 76% improvement overthe baseline model.",,arXiv,['cs.lg'],, incontext learning with iterative demonstration selection,"['Chengwei Qin', 'Aston Zhang', 'Anirudh Dagar', 'Wenming Ye']",http://arxiv.org/pdf/2310.09881v2.pdf,2023-10-15,," Spurred by advancements in scale, large language models (LLMs) havedemonstrated strong few-shot learning ability via in-context learning (ICL).However, the performance of ICL has been shown to be highly sensitive to theselection of few-shot demonstrations. Selecting the most suitable examples ascontext remains an ongoing challenge and an open problem. Existing literaturehas highlighted the importance of selecting examples that are diverse orsemantically similar to the test sample while ignoring the fact that theoptimal selection dimension, i.e., diversity or similarity, is task-specific.Leveraging the merits of both dimensions, we propose Iterative DemonstrationSelection (IDS). Using zero-shot chain-of-thought reasoning (Zero-shot-CoT),IDS iteratively selects examples that are diverse but still strongly correlatedwith the test sample as ICL demonstrations. Specifically, IDS appliesZero-shot-CoT to the test sample before demonstration selection. The outputreasoning path is then used to choose demonstrations that are prepended to thetest sample for inference. The generated answer is accompanied by itscorresponding reasoning path for extracting a new set of demonstrations in thenext iteration. After several iterations, IDS adopts majority voting to obtainthe final result. 
Through extensive experiments on tasks including commonsensereasoning, question answering, topic classification, and sentiment analysis, wedemonstrate that IDS can consistently outperform existing ICL demonstrationselection methods.",,arXiv,"['cs.cl', 'cs.ai']",, the skipped beat a study of sociopragmatic understanding in llms for 64 languages,"['Chiyu Zhang', 'Khai Duy Doan', 'Qisheng Liao', 'Muhammad Abdul-Mageed']",http://arxiv.org/pdf/2310.14557v1.pdf,2023-10-23,," Instruction tuned large language models (LLMs), such as ChatGPT, demonstrateremarkable performance in a wide range of tasks. Despite numerous recentstudies that examine the performance of instruction-tuned LLMs on various NLPbenchmarks, there remains a lack of comprehensive investigation into theirability to understand cross-lingual sociopragmatic meaning (SM), i.e., meaningembedded within social and interactive contexts. This deficiency arises partlyfrom SM not being adequately represented in any of the existing benchmarks. Toaddress this gap, we present SPARROW, an extensive multilingual benchmarkspecifically designed for SM understanding. SPARROW comprises 169 datasetscovering 13 task types across six primary categories (e.g., anti-sociallanguage detection, emotion recognition). SPARROW datasets encompass 64different languages originating from 12 language families representing 16writing scripts. We evaluate the performance of various multilingual pretrainedlanguage models (e.g., mT5) and instruction-tuned LLMs (e.g., BLOOMZ, ChatGPT)on SPARROW through fine-tuning, zero-shot, and/or few-shot learning. Ourcomprehensive analysis reveals that existing open-source instruction tuned LLMsstill struggle to understand SM across various languages, performing close to arandom baseline in some cases. We also find that although ChatGPT outperformsmany LLMs, it still falls behind task-specific finetuned models with a gap of12.19 SPARROW score. Our benchmark is available at:https://github.com/UBC-NLP/SPARROW",,arXiv,['cs.cl'],, program synthesis with large language models,"['Jacob Austin', 'Augustus Odena', 'Maxwell Nye', 'Maarten Bosma', 'Henryk Michalewski', 'David Dohan', 'Ellen Jiang', 'Carrie Cai', 'Michael Terry', 'Quoc Le', 'Charles Sutton']",http://arxiv.org/pdf/2108.07732v1.pdf,2021-08-16,," This paper explores the limits of the current generation of large languagemodels for program synthesis in general purpose programming languages. Weevaluate a collection of such models (with between 244M and 137B parameters) ontwo new benchmarks, MBPP and MathQA-Python, in both the few-shot andfine-tuning regimes. Our benchmarks are designed to measure the ability ofthese models to synthesize short Python programs from natural languagedescriptions. The Mostly Basic Programming Problems (MBPP) dataset contains 974programming tasks, designed to be solvable by entry-level programmers. TheMathQA-Python dataset, a Python version of the MathQA benchmark, contains 23914problems that evaluate the ability of the models to synthesize code from morecomplex text. On both datasets, we find that synthesis performance scaleslog-linearly with model size. Our largest models, even without finetuning on acode dataset, can synthesize solutions to 59.6 percent of the problems fromMBPP using few-shot learning with a well-designed prompt. Fine-tuning on aheld-out portion of the dataset improves performance by about 10 percentagepoints across most model sizes. On the MathQA-Python dataset, the largestfine-tuned model achieves 83.8 percent accuracy. 
Going further, we study the model's ability to engage in dialog about code, incorporating human feedback to improve its solutions. We find that natural language feedback from a human halves the error rate compared to the model's initial prediction. Additionally, we conduct an error analysis to shed light on where these models fall short and what types of programs are most difficult to generate. Finally, we explore the semantic grounding of these models by fine-tuning them to predict the results of program execution. We find that even our best models are generally unable to predict the output of a program given a specific input.",,arXiv,"['cs.pl', 'cs.lg']",, "a minimalist dataset for systematic generalization of perception, syntax, and semantics","['Qing Li', 'Siyuan Huang', 'Yining Hong', 'Yixin Zhu', 'Ying Nian Wu', 'Song-Chun Zhu']",http://arxiv.org/pdf/2103.01403v3.pdf,2021-03-02,," Inspired by humans' exceptional ability to master arithmetic and generalize to new problems, we present a new dataset, Handwritten arithmetic with INTegers (HINT), to examine machines' capability of learning generalizable concepts at three levels: perception, syntax, and semantics. In HINT, machines are tasked with learning how concepts are perceived from raw signals such as images (i.e., perception), how multiple concepts are structurally combined to form a valid expression (i.e., syntax), and how concepts are realized to afford various reasoning tasks (i.e., semantics), all in a weakly supervised manner. Focusing on systematic generalization, we carefully design a five-fold test set to evaluate both the interpolation and the extrapolation of learned concepts w.r.t. the three levels. Further, we design a few-shot learning split to determine whether or not models can rapidly learn new concepts and generalize them to more complex scenarios. To comprehend existing models' limitations, we undertake extensive experiments with various sequence-to-sequence models, including RNNs, Transformers, and GPT-3 (with the chain of thought prompting). The results indicate that current models struggle to extrapolate to long-range syntactic dependency and semantics. Models exhibit a considerable gap toward human-level generalization when evaluated with new concepts in a few-shot setting. Moreover, we discover that it is infeasible to solve HINT by merely scaling up the dataset and the model size; this strategy contributes little to the extrapolation of syntax and semantics. Finally, in zero-shot GPT-3 experiments, the chain of thought prompting exhibits impressive results and significantly boosts the test accuracy. We believe the HINT dataset and the experimental findings are of great interest to the learning community on systematic generalization.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cv']",, large language models are zeroshot reasoners,"['Takeshi Kojima', 'Shixiang Shane Gu', 'Machel Reid', 'Yutaka Matsuo', 'Yusuke Iwasawa']",http://arxiv.org/pdf/2205.11916v4.pdf,2022-05-24,," Pretrained large language models (LLMs) are widely used in many sub-fields of natural language processing (NLP) and generally known as excellent few-shot learners with task-specific exemplars. Notably, chain of thought (CoT) prompting, a recent technique for eliciting complex multi-step reasoning through step-by-step answer examples, achieved the state-of-the-art performances in arithmetics and symbolic reasoning, difficult system-2 tasks that do not follow the standard scaling laws for LLMs.
While these successes are often attributed to LLMs' ability for few-shot learning, we show that LLMs are decent zero-shot reasoners by simply adding ""Let's think step by step"" before each answer. Experimental results demonstrate that our Zero-shot-CoT, using the same single prompt template, significantly outperforms zero-shot LLM performances on diverse benchmark reasoning tasks including arithmetics (MultiArith, GSM8K, AQUA-RAT, SVAMP), symbolic reasoning (Last Letter, Coin Flip), and other logical reasoning tasks (Date Understanding, Tracking Shuffled Objects), without any hand-crafted few-shot examples, e.g. increasing the accuracy on MultiArith from 17.7% to 78.7% and GSM8K from 10.4% to 40.7% with large InstructGPT model (text-davinci-002), as well as similar magnitudes of improvements with another off-the-shelf large model, 540B parameter PaLM. The versatility of this single prompt across very diverse reasoning tasks hints at untapped and understudied fundamental zero-shot capabilities of LLMs, suggesting high-level, multi-task broad cognitive capabilities may be extracted by simple prompting. We hope our work not only serves as the minimal strongest zero-shot baseline for the challenging reasoning benchmarks, but also highlights the importance of carefully exploring and analyzing the enormous zero-shot knowledge hidden inside LLMs before crafting finetuning datasets or few-shot exemplars.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, an empirical evaluation of using large language models for automated unit test generation,"['Max Schäfer', 'Sarah Nadi', 'Aryaz Eghbali', 'Frank Tip']",http://arxiv.org/pdf/2302.06527v4.pdf,2023-02-13,," Unit tests play a key role in ensuring the correctness of software. However, manually creating unit tests is a laborious task, motivating the need for automation. Large Language Models (LLMs) have recently been applied to this problem, utilizing additional training or few-shot learning on examples of existing tests. This paper presents a large-scale empirical evaluation on the effectiveness of LLMs for automated unit test generation without additional training or manual effort, providing the LLM with the signature and implementation of the function under test, along with usage examples extracted from documentation. We also attempt to repair failed generated tests by re-prompting the model with the failing test and error message. We implement our approach in TestPilot, a test generation tool for JavaScript that automatically generates unit tests for all API functions in an npm package. We evaluate TestPilot using OpenAI's gpt3.5-turbo LLM on 25 npm packages with a total of 1,684 API functions. The generated tests achieve a median statement coverage of 70.2% and branch coverage of 52.8%, significantly improving on Nessie, a recent feedback-directed JavaScript test generation technique, which achieves only 51.3% statement coverage and 25.6% branch coverage. We also find that 92.8% of TestPilot's generated tests have no more than 50% similarity with existing tests (as measured by normalized edit distance), with none of them being exact copies. Finally, we run TestPilot with two additional LLMs, OpenAI's older code-cushman-002 LLM and the open LLM StarCoder.
Overall, we observed similar results with the former (68.2% median statement coverage), and somewhat worse results with the latter (54.0% median statement coverage), suggesting that the effectiveness of the approach is influenced by the size and training set of the LLM, but does not fundamentally depend on the specific model.",,arXiv,"['cs.se', 'cs.ai']",, on the opportunities and challenges of foundation models for geospatial artificial intelligence,"['Gengchen Mai', 'Weiming Huang', 'Jin Sun', 'Suhang Song', 'Deepak Mishra', 'Ninghao Liu', 'Song Gao', 'Tianming Liu', 'Gao Cong', 'Yingjie Hu', 'Chris Cundy', 'Ziyuan Li', 'Rui Zhu', 'Ni Lao']",http://arxiv.org/pdf/2304.06798v1.pdf,2023-04-13,," Large pre-trained models, also known as foundation models (FMs), are trained in a task-agnostic manner on large-scale data and can be adapted to a wide range of downstream tasks by fine-tuning, few-shot, or even zero-shot learning. Despite their successes in language and vision tasks, we have yet to see an attempt to develop foundation models for geospatial artificial intelligence (GeoAI). In this work, we explore the promises and challenges of developing multimodal foundation models for GeoAI. We first investigate the potential of many existing FMs by testing their performances on seven tasks across multiple geospatial subdomains including Geospatial Semantics, Health Geography, Urban Geography, and Remote Sensing. Our results indicate that on several geospatial tasks that only involve text modality such as toponym recognition, location description recognition, and US state-level/county-level dementia time series forecasting, these task-agnostic LLMs can outperform task-specific fully-supervised models in a zero-shot or few-shot learning setting. However, on other geospatial tasks, especially tasks that involve multiple data modalities (e.g., POI-based urban function classification, street view image-based urban noise intensity classification, and remote sensing image scene classification), existing foundation models still underperform task-specific models. Based on these observations, we propose that one of the major challenges of developing a FM for GeoAI is to address the multimodality nature of geospatial tasks. After discussing the distinct challenges of each geospatial data modality, we suggest the possibility of a multimodal foundation model which can reason over various types of geospatial data through geospatial alignments. We conclude this paper by discussing the unique risks and challenges to develop such a model for GeoAI.",,arXiv,"['cs.ai', 'cs.cl', 'cs.cv', 'i.2.0; i.2.4; i.2.7; i.2.10; i.5.1']",, effective test generation using pretrained large language models and mutation testing,"['Arghavan Moradi Dakhel', 'Amin Nikanjam', 'Vahid Majdinasab', 'Foutse Khomh', 'Michel C. Desmarais']",http://arxiv.org/pdf/2308.16557v1.pdf,2023-08-31,," One of the critical phases in software development is software testing. Testing helps with identifying potential bugs and reducing maintenance costs. The goal of automated test generation tools is to ease the development of tests by suggesting efficient bug-revealing tests. Recently, researchers have leveraged Large Language Models (LLMs) of code to generate unit tests. While the code coverage of generated tests was usually assessed, the literature has acknowledged that the coverage is weakly correlated with the efficiency of tests in bug detection.
To improve over this limitation, in this paper, weintroduce MuTAP for improving the effectiveness of test cases generated by LLMsin terms of revealing bugs by leveraging mutation testing. Our goal is achievedby augmenting prompts with surviving mutants, as those mutants highlight thelimitations of test cases in detecting bugs. MuTAP is capable of generatingeffective test cases in the absence of natural language descriptions of theProgram Under Test (PUTs). We employ different LLMs within MuTAP and evaluatetheir performance on different benchmarks. Our results show that our proposedmethod is able to detect up to 28% more faulty human-written code snippets.Among these, 17% remained undetected by both the current state-of-the-art fullyautomated test generation tool (i.e., Pynguin) and zero-shot/few-shot learningapproaches on LLMs. Furthermore, MuTAP achieves a Mutation Score (MS) of 93.57%on synthetic buggy code, outperforming all other approaches in our evaluation.Our findings suggest that although LLMs can serve as a useful tool to generatetest cases, they require specific post-processing steps to enhance theeffectiveness of the generated test cases which may suffer from syntactic orfunctional errors and may be ineffective in detecting certain types of bugs andtesting corner cases PUTs.",,arXiv,['cs.se'],, an evaluation of gpt models for phenotype concept recognition,"['Tudor Groza', 'Harry Caufield', 'Dylan Gration', 'Gareth Baynam', 'Melissa A Haendel', 'Peter N Robinson', 'Christopher J Mungall', 'Justin T Reese']",http://arxiv.org/pdf/2309.17169v2.pdf,2023-09-29,," Objective: Clinical deep phenotyping and phenotype annotation play a criticalrole in both the diagnosis of patients with rare disorders as well as inbuilding computationally-tractable knowledge in the rare disorders field. Theseprocesses rely on using ontology concepts, often from the Human PhenotypeOntology, in conjunction with a phenotype concept recognition task (supportedusually by machine learning methods) to curate patient profiles or existingscientific literature. With the significant shift in the use of large languagemodels (LLMs) for most NLP tasks, we examine the performance of the latestGenerative Pre-trained Transformer (GPT) models underpinning ChatGPT as afoundation for the tasks of clinical phenotyping and phenotype annotation.Materials and Methods: The experimental setup of the study included sevenprompts of various levels of specificity, two GPT models (gpt-3.5-turbo andgpt-4.0) and two established gold standard corpora for phenotype recognition,one consisting of publication abstracts and the other clinical observations.Results: Our results show that, with an appropriate setup, these models canachieve state of the art performance. The best run, using few-shot learning,achieved 0.58 macro F1 score on publication abstracts and 0.75 macro F1 scoreon clinical observations, the former being comparable with the state of theart, while the latter surpassing the current best in class tool. 
Conclusion:While the results are promising, the non-deterministic nature of the outcomes,the high cost and the lack of concordance between different runs using the sameprompt and input make the use of these LLMs challenging for this particulartask.",,arXiv,"['cs.cl', 'cs.ai']",, llm4sgg large language model for weakly supervised scene graph generation,"['Kibum Kim', 'Kanghoon Yoon', 'Jaehyeong Jeon', 'Yeonjun In', 'Jinyoung Moon', 'Donghyun Kim', 'Chanyoung Park']",http://arxiv.org/pdf/2310.10404v5.pdf,2023-10-16,," Weakly-Supervised Scene Graph Generation (WSSGG) research has recentlyemerged as an alternative to the fully-supervised approach that heavily relieson costly annotations. In this regard, studies on WSSGG have utilized imagecaptions to obtain unlocalized triplets while primarily focusing on groundingthe unlocalized triplets over image regions. However, they have overlooked thetwo issues involved in the triplet formation process from the captions: 1)Semantic over-simplification issue arises when extracting triplets fromcaptions, where fine-grained predicates in captions are undesirably convertedinto coarse-grained predicates, resulting in a long-tailed predicatedistribution, and 2) Low-density scene graph issue arises when aligning thetriplets in the caption with entity/predicate classes of interest, where manytriplets are discarded and not used in training, leading to insufficientsupervision. To tackle the two issues, we propose a new approach, i.e., LargeLanguage Model for weakly-supervised SGG (LLM4SGG), where we mitigate the twoissues by leveraging the LLM's in-depth understanding of language and reasoningability during the extraction of triplets from captions and alignment ofentity/predicate classes with target data. To further engage the LLM in theseprocesses, we adopt the idea of Chain-of-Thought and the in-context few-shotlearning strategy. To validate the effectiveness of LLM4SGG, we conductextensive experiments on Visual Genome and GQA datasets, showing significantimprovements in both Recall@K and mean Recall@K compared to thestate-of-the-art WSSGG methods. A further appeal is that LLM4SGG isdata-efficient, enabling effective model training with a small amount oftraining images.",,arXiv,['cs.cv'],, masakhanews news topic classification for african languages,"['David Ifeoluwa Adelani', 'Marek Masiak', 'Israel Abebe Azime', 'Jesujoba Alabi', 'Atnafu Lambebo Tonja', 'Christine Mwase', 'Odunayo Ogundepo', 'Bonaventure F. P. 
Dossou', 'Akintunde Oladipo', 'Doreen Nixdorf', 'Chris Chinenye Emezue', 'sana al-azzawi', 'Blessing Sibanda', 'Davis David', 'Lolwethu Ndolela', 'Jonathan Mukiibi', 'Tunde Ajayi', 'Tatiana Moteu', 'Brian Odhiambo', 'Abraham Owodunni', 'Nnaemeka Obiefuna', 'Muhidin Mohamed', 'Shamsuddeen Hassan Muhammad', 'Teshome Mulugeta Ababu', 'Saheed Abdullahi Salahudeen', 'Mesay Gemeda Yigezu', 'Tajuddeen Gwadabe', 'Idris Abdulmumin', 'Mahlet Taye', 'Oluwabusayo Awoyomi', 'Iyanuoluwa Shode', 'Tolulope Adelani', 'Habiba Abdulganiyu', 'Abdul-Hakeem Omotayo', 'Adetola Adeeko', 'Abeeb Afolabi', 'Anuoluwapo Aremu', 'Olanrewaju Samuel', 'Clemencia Siro', 'Wangari Kimotho', 'Onyekachi Ogbu', 'Chinedu Mbonu', 'Chiamaka Chukwuneke', 'Samuel Fanijo', 'Jessica Ojo', 'Oyinkansola Awosan', 'Tadesse Kebede', 'Toadoum Sari Sakayo', 'Pamela Nyatsine', 'Freedmore Sidume', 'Oreen Yousuf', 'Mardiyyah Oduwole', 'Tshinu Tshinu', 'Ussen Kimanuka', 'Thina Diko', 'Siyanda Nxakama', 'Sinodos Nigusse', 'Abdulmejid Johar', 'Shafie Mohamed', 'Fuad Mire Hassan', 'Moges Ahmed Mehamed', 'Evrard Ngabire', 'Jules Jules', 'Ivan Ssenkungu', 'Pontus Stenetorp']",http://arxiv.org/pdf/2304.09972v2.pdf,2023-04-19,," African languages are severely under-represented in NLP research due to lackof datasets covering several NLP tasks. While there are individual languagespecific datasets that are being expanded to different tasks, only a handful ofNLP tasks (e.g. named entity recognition and machine translation) havestandardized benchmark datasets covering several geographical andtypologically-diverse African languages. In this paper, we develop MasakhaNEWS-- a new benchmark dataset for news topic classification covering 16 languageswidely spoken in Africa. We provide an evaluation of baseline models bytraining classical machine learning models and fine-tuning several languagemodels. Furthermore, we explore several alternatives to full fine-tuning oflanguage models that are better suited for zero-shot and few-shot learning suchas cross-lingual parameter-efficient fine-tuning (like MAD-X), patternexploiting training (PET), prompting language models (like ChatGPT), andprompt-free sentence transformer fine-tuning (SetFit and Cohere Embedding API).Our evaluation in zero-shot setting shows the potential of prompting ChatGPTfor news topic classification in low-resource African languages, achieving anaverage performance of 70 F1 points without leveraging additional supervisionlike MAD-X. In few-shot setting, we show that with as little as 10 examples perlabel, we achieved more than 90\% (i.e. 86.0 F1 points) of the performance offull supervised training (92.6 F1 points) leveraging the PET approach.",,arXiv,['cs.cl'],, nspbert a promptbased fewshot learner through an original pretraining tasknext sentence prediction,"['Yi Sun', 'Yu Zheng', 'Chao Hao', 'Hangping Qiu']",http://arxiv.org/pdf/2109.03564v2.pdf,2021-09-08,," Using prompts to utilize language models to perform various downstream tasks,also known as prompt-based learning or prompt-learning, has lately gainedsignificant success in comparison to the pre-train and fine-tune paradigm.Nonetheless, virtually all prompt-based methods are token-level, meaning theyall utilize GPT's left-to-right language model or BERT's masked language modelto perform cloze-style tasks. In this paper, we attempt to accomplish severalNLP tasks in the zero-shot scenario using a BERT original pre-training taskabandoned by RoBERTa and other models--Next Sentence Prediction (NSP). 
Unliketoken-level techniques, our sentence-level prompt-based method NSP-BERT doesnot need to fix the length of the prompt or the position to be predicted,allowing it to handle tasks such as entity linking with ease. Based on thecharacteristics of NSP-BERT, we offer several quick building templates forvarious downstream tasks. We suggest a two-stage prompt method for word sensedisambiguation tasks in particular. Our strategies for mapping the labelssignificantly enhance the model's performance on sentence pair tasks. On theFewCLUE benchmark, our NSP-BERT outperforms other zero-shot methods on most ofthese tasks and comes close to the few-shot methods.",,arXiv,"['cs.cl', 'cs.ai']",, introducing language guidance in promptbased continual learning,"['Muhammad Gul Zain Ali Khan', 'Muhammad Ferjad Naeem', 'Luc Van Gool', 'Didier Stricker', 'Federico Tombari', 'Muhammad Zeshan Afzal']",http://arxiv.org/pdf/2308.15827v1.pdf,2023-08-30,," Continual Learning aims to learn a single model on a sequence of taskswithout having access to data from previous tasks. The biggest challenge in thedomain still remains catastrophic forgetting: a loss in performance on seenclasses of earlier tasks. Some existing methods rely on an expensive replaybuffer to store a chunk of data from previous tasks. This, while promising,becomes expensive when the number of tasks becomes large or data can not bestored for privacy reasons. As an alternative, prompt-based methods have beenproposed that store the task information in a learnable prompt pool. Thisprompt pool instructs a frozen image encoder on how to solve each task. Whilethe model faces a disjoint set of classes in each task in this setting, weargue that these classes can be encoded to the same embedding space of apre-trained language encoder. In this work, we propose Language Guidance forPrompt-based Continual Learning (LGCL) as a plug-in for prompt-based methods.LGCL is model agnostic and introduces language guidance at the task level inthe prompt pool and at the class level on the output feature of the visionencoder. We show with extensive experimentation that LGCL consistently improvesthe performance of prompt-based continual learning methods to set a newstate-of-the art. LGCL achieves these performance improvements without needingany additional learnable parameters.",,arXiv,['cs.cv'],, psg promptbased sequence generation for acronym extraction,"['Bin Li', 'Fei Xia', 'Yixuan Weng', 'Xiusheng Huang', 'Bin Sun', 'Shutao Li']",http://arxiv.org/pdf/2111.14301v2.pdf,2021-11-29,," Acronym extraction aims to find acronyms (i.e., short-forms) and theirmeanings (i.e., long-forms) from the documents, which is important forscientific document understanding (SDU@AAAI-22) tasks. Previous works aredevoted to modeling this task as a paragraph-level sequence labeling problem.However, it lacks the effective use of the external knowledge, especially whenthe datasets are in a low-resource setting. Recently, the prompt-based methodwith the vast pre-trained language model can significantly enhance theperformance of the low-resourced downstream tasks. In this paper, we propose aPrompt-based Sequence Generation (PSG) method for the acronym extraction task.Specifically, we design a template for prompting the extracted acronym textswith auto-regression. A position extraction algorithm is designed forextracting the position of the generated answers. 
The results on the acronymextraction of Vietnamese and Persian in a low-resource setting show that theproposed method outperforms all other competitive state-of-the-art (SOTA)methods.",,arXiv,"['cs.cl', 'cs.ai']",, chemical identification and indexing in pubmed articles via bert and texttotext approaches,"['Virginia Adams', 'Hoo-Chang Shin', 'Carol Anderson', 'Bo Liu', 'Anas Abidin']",http://arxiv.org/pdf/2111.15622v1.pdf,2021-11-30,," The Biocreative VII Track-2 challenge consists of named entity recognition,entity-linking (or entity-normalization), and topic indexing tasks -- withentities and topics limited to chemicals for this challenge. Named entityrecognition is a well-established problem and we achieve our best performancewith BERT-based BioMegatron models. We extend our BERT-based approach to theentity linking task. After the second stage of pretraining BioBERT with ametric-learning loss strategy called self-alignment pretraining (SAP), we linkentities based on the cosine similarity between their SAP-BioBERT wordembeddings. Despite the success of our named entity recognition experiments, wefind the chemical indexing task generally more challenging. In addition to conventional NER methods, we attempt both named entityrecognition and entity linking with a novel text-to-text or ""prompt"" basedmethod that uses generative language models such as T5 and GPT. We achieveencouraging results with this new approach.",,arXiv,['cs.cl'],, gpts at factify 2022 prompt aided factverification,"['Pawan Kumar Sahu', 'Saksham Aggarwal', 'Taneesh Gupta', 'Gyanendra Das']",http://arxiv.org/pdf/2206.14913v1.pdf,2022-06-29,," One of the most pressing societal issues is the fight against false news. Thefalse claims, as difficult as they are to expose, create a lot of damage. Totackle the problem, fact verification becomes crucial and thus has been a topicof interest among diverse research communities. Using only the textual form ofdata we propose our solution to the problem and achieve competitive resultswith other approaches. We present our solution based on two approaches - PLM(pre-trained language model) based method and Prompt based method. ThePLM-based approach uses the traditional supervised learning, where the model istrained to take 'x' as input and output prediction 'y' as P(y|x). Whereas,Prompt-based learning reflects the idea to design input to fit the model suchthat the original objective may be re-framed as a problem of (masked) languagemodeling. We may further stimulate the rich knowledge provided by PLMs tobetter serve downstream tasks by employing extra prompts to fine-tune PLMs. Ourexperiments showed that the proposed method performs better than justfine-tuning PLMs. We achieved an F1 score of 0.6946 on the FACTIFY dataset anda 7th position on the competition leader-board.",,arXiv,['cs.cl'],, quantifying language models' sensitivity to spurious features in prompt design or how i learned to start worrying about prompt formatting,"['Melanie Sclar', 'Yejin Choi', 'Yulia Tsvetkov', 'Alane Suhr']",http://arxiv.org/pdf/2310.11324v1.pdf,2023-10-17,," As large language models (LLMs) are adopted as a fundamental component oflanguage technologies, it is crucial to accurately characterize theirperformance. Because choices in prompt design can strongly influence modelbehavior, this design process is critical in effectively using any modernpre-trained generative language model. In this work, we focus on LLMsensitivity to a quintessential class of meaning-preserving design choices:prompt formatting. 
We find that several widely used open-source LLMs are extremely sensitive to subtle changes in prompt formatting in few-shot settings, with performance differences of up to 76 accuracy points when evaluated using LLaMA-2-13B. Sensitivity remains even when increasing model size, the number of few-shot examples, or performing instruction tuning. Our analysis suggests that work evaluating LLMs with prompting-based methods would benefit from reporting a range of performance across plausible prompt formats, instead of the currently-standard practice of reporting performance on a single format. We also show that format performance only weakly correlates between models, which puts into question the methodological validity of comparing models with an arbitrarily chosen, fixed prompt format. To facilitate systematic analysis we propose FormatSpread, an algorithm that rapidly evaluates a sampled set of plausible prompt formats for a given task, and reports the interval of expected performance without accessing model weights. Furthermore, we present a suite of analyses that characterize the nature of this sensitivity, including exploring the influence of particular atomic perturbations and the internal representation of particular formats.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, gpt3driven pedagogical agents for training children's curious questionasking skills,"['Rania Abdelghani', 'Yen-Hsiang Wang', 'Xingdi Yuan', 'Tong Wang', 'Pauline Lucas', 'Hélène Sauzéon', 'Pierre-Yves Oudeyer']",http://arxiv.org/pdf/2211.14228v6.pdf,2022-11-25,," In order to train children's ability to ask curiosity-driven questions, previous research has explored designing specific exercises relying on providing semantic and linguistic cues to help formulate such questions. But despite showing pedagogical efficiency, this method is still limited as it relies on generating the said cues by hand, which can be a very costly process. In this context, we propose to leverage advances in the natural language processing field (NLP) and investigate the efficiency of using a large language model (LLM) for automating the production of the pedagogical content of a curious question-asking (QA) training. We study generating the said content using the ""prompt-based"" method that consists of explaining the task to the LLM in natural text. We evaluate the output using human experts annotations and comparisons with hand-generated content. Results suggested indeed the relevance and usefulness of this content. We also conduct a field study in primary school (75 children aged 9-10), where we evaluate children's QA performance when having this training. We compare 3 types of content: 1) hand-generated content that proposes ""closed"" cues leading to predefined questions; 2) GPT-3-generated content that proposes the same type of cues; 3) GPT-3-generated content that proposes ""open"" cues leading to several possible questions. We see a similar QA performance between the two ""closed"" trainings (showing the scalability of the approach using GPT-3), and a better one for participants with the ""open"" training. These results suggest the efficiency of using LLMs to support children in generating more curious questions, using a natural language prompting approach that affords usability by teachers and other users not specialists of AI techniques.
Furthermore, results also show that open-endedcontent may be more suitable for training curious question-asking skills.",,arXiv,"['cs.cl', 'cs.hc']",, mentalllm leveraging large language models for mental health prediction via online text data,"['Xuhai Xu', 'Bingsheng Yao', 'Yuanzhe Dong', 'Saadia Gabriel', 'Hong Yu', 'James Hendler', 'Marzyeh Ghassemi', 'Anind K. Dey', 'Dakuo Wang']",http://arxiv.org/pdf/2307.14385v4.pdf,2023-07-26,," Advances in large language models (LLMs) have empowered a variety ofapplications. However, there is still a significant gap in research when itcomes to understanding and enhancing the capabilities of LLMs in the field ofmental health. In this work, we present a comprehensive evaluation of multipleLLMs on various mental health prediction tasks via online text data, includingAlpaca, Alpaca-LoRA, FLAN-T5, GPT-3.5, and GPT-4. We conduct a broad range ofexperiments, covering zero-shot prompting, few-shot prompting, and instructionfine-tuning. The results indicate a promising yet limited performance of LLMswith zero-shot and few-shot prompt designs for mental health tasks. Moreimportantly, our experiments show that instruction finetuning can significantlyboost the performance of LLMs for all tasks simultaneously. Our best-finetunedmodels, Mental-Alpaca and Mental-FLAN-T5, outperform the best prompt design ofGPT-3.5 (25 and 15 times bigger) by 10.9% on balanced accuracy and the best ofGPT-4 (250 and 150 times bigger) by 4.8%. They further perform on par with thestate-of-the-art task-specific language model. We also conduct an exploratorycase study on LLMs' capability on mental health reasoning tasks, illustratingthe promising capability of certain models such as GPT-4. We summarize ourfindings into a set of action guidelines for potential methods to enhance LLMs'capability for mental health tasks. Meanwhile, we also emphasize the importantlimitations before achieving deployability in real-world mental healthsettings, such as known racial and gender bias. We highlight the importantethical risks accompanying this line of research.",,arXiv,"['cs.cl', '68u35', 'h.5.2; i.2.m']",, towards zerolabel language learning,"['Zirui Wang', 'Adams Wei Yu', 'Orhan Firat', 'Yuan Cao']",http://arxiv.org/pdf/2109.09193v1.pdf,2021-09-19,," This paper explores zero-label learning in Natural Language Processing (NLP),whereby no human-annotated data is used anywhere during training and models aretrained purely on synthetic data. At the core of our framework is a novelapproach for better leveraging the powerful pretrained language models.Specifically, inspired by the recent success of few-shot inference on GPT-3, wepresent a training data creation procedure named Unsupervised Data Generation(UDG), which leverages few-shot prompts to synthesize high-quality trainingdata without real human annotations. Our method enables zero-label learning aswe train task-specific models solely on the synthetic data, yet we achievebetter or comparable results from strong baseline models trained onhuman-labeled data. Furthermore, when mixed with labeled data, our approachserves as a highly effective data augmentation procedure, achieving newstate-of-the-art results on the SuperGLUE benchmark.",,arXiv,"['cs.cl', 'cs.lg']",, covid vaccine is against covid but oxford vaccine is made at oxford! 
semantic interpretation of proper noun compounds,"['Keshav Kolluru', 'Gabriel Stanovsky', ' Mausam']",http://arxiv.org/pdf/2210.13039v1.pdf,2022-10-24,," Proper noun compounds, e.g., ""Covid vaccine"", convey information in asuccinct manner (a ""Covid vaccine"" is a ""vaccine that immunizes against theCovid disease""). These are commonly used in short-form domains, such as newsheadlines, but are largely ignored in information-seeking applications. Toaddress this limitation, we release a new manually annotated dataset, ProNCI,consisting of 22.5K proper noun compounds along with their free-form semanticinterpretations. ProNCI is 60 times larger than prior noun compound datasetsand also includes non-compositional examples, which have not been previouslyexplored. We experiment with various neural models for automatically generatingthe semantic interpretations from proper noun compounds, ranging from few-shotprompting to supervised learning, with varying degrees of knowledge about theconstituent nouns. We find that adding targeted knowledge, particularly aboutthe common noun, results in performance gains of upto 2.8%. Finally, weintegrate our model generated interpretations with an existing Open IE systemand observe an 7.5% increase in yield at a precision of 85%. The dataset andcode are available at https://github.com/dair-iitd/pronci.",,arXiv,['cs.cl'],, visualizing linguistic diversity of text datasets synthesized by large language models,"['Emily Reif', 'Minsuk Kahng', 'Savvas Petridis']",http://arxiv.org/pdf/2305.11364v2.pdf,2023-05-19,," Large language models (LLMs) can be used to generate smaller, more refineddatasets via few-shot prompting for benchmarking, fine-tuning or other usecases. However, understanding and evaluating these datasets is difficult, andthe failure modes of LLM-generated data are still not well understood.Specifically, the data can be repetitive in surprising ways, not onlysemantically but also syntactically and lexically. We present LinguisticLens, anovel inter-active visualization tool for making sense of and analyzingsyntactic diversity of LLM-generated datasets. LinguisticLens clusters textalong syntactic, lexical, and semantic axes. It supports hierarchicalvisualization of a text dataset, allowing users to quickly scan for an overviewand inspect individual examples. The live demo is available atshorturl.at/zHOUV.",,arXiv,"['cs.cl', 'cs.ai']",, summqa at mediqachat 2023incontext learning with gpt4 for medical summarization,"['Yash Mathur', 'Sanketh Rangreji', 'Raghav Kapoor', 'Medha Palavalli', 'Amanda Bertsch', 'Matthew R. Gormley']",http://arxiv.org/pdf/2306.17384v1.pdf,2023-06-30,," Medical dialogue summarization is challenging due to the unstructured natureof medical conversations, the use of medical terminology in gold summaries, andthe need to identify key information across multiple symptom sets. We present anovel system for the Dialogue2Note Medical Summarization tasks in the MEDIQA2023 Shared Task. Our approach for section-wise summarization (Task A) is atwo-stage process of selecting semantically similar dialogues and using thetop-k similar dialogues as in-context examples for GPT-4. For full-notesummarization (Task B), we use a similar solution with k=1. 
We achieved 3rdplace in Task A (2nd among all teams), 4th place in Task B Division WiseSummarization (2nd among all teams), 15th place in Task A Section HeaderClassification (9th among all teams), and 8th place among all teams in Task B.Our results highlight the effectiveness of few-shot prompting for this task,though we also identify several weaknesses of prompting-based approaches. Wecompare GPT-4 performance with several finetuned baselines. We find that GPT-4summaries are more abstractive and shorter. We make our code publiclyavailable.",,arXiv,['cs.cl'],, ecologically valid explanations for label variation in nli,"['Nan-Jiang Jiang', 'Chenhao Tan', 'Marie-Catherine de Marneffe']",http://arxiv.org/pdf/2310.13850v1.pdf,2023-10-20,," Human label variation, or annotation disagreement, exists in many naturallanguage processing (NLP) tasks, including natural language inference (NLI). Togain direct evidence of how NLI label variation arises, we build LiveNLI, anEnglish dataset of 1,415 ecologically valid explanations (annotators explainthe NLI labels they chose) for 122 MNLI items (at least 10 explanations peritem). The LiveNLI explanations confirm that people can systematically vary ontheir interpretation and highlight within-label variation: annotators sometimeschoose the same label for different reasons. This suggests that explanationsare crucial for navigating label interpretations in general. We few-shot promptlarge language models to generate explanations but the results areinconsistent: they sometimes produces valid and informative explanations, butit also generates implausible ones that do not support the label, highlightingdirections for improvement.",,arXiv,['cs.cl'],, apiassisted code generation for question answering on varied table structures,"['Yihan Cao', 'Shuyi Chen', 'Ryan Liu', 'Zhiruo Wang', 'Daniel Fried']",http://arxiv.org/pdf/2310.14687v1.pdf,2023-10-23,," A persistent challenge to table question answering (TableQA) by generatingexecutable programs has been adapting to varied table structures, typicallyrequiring domain-specific logical forms. In response, this paper introduces aunified TableQA framework that: (1) provides a unified representation forstructured tables as multi-index Pandas data frames, (2) uses Python as apowerful querying language, and (3) uses few-shot prompting to translate NLquestions into Python programs, which are executable on Pandas data frames.Furthermore, to answer complex relational questions with extended programfunctionality and external knowledge, our framework allows customized APIs thatPython programs can call. We experiment with four TableQA datasets that involvetables of different structures -- relational, multi-table, and hierarchicalmatrix shapes -- and achieve prominent improvements over past state-of-the-artsystems. In ablation studies, we (1) show benefits from our multi-indexrepresentation and APIs over baselines that use only an LLM, and (2)demonstrate that our approach is modular and can incorporate additional APIs.",,arXiv,"['cs.cl', 'cs.ai']",, tree of clarifications answering ambiguous questions with retrievalaugmented large language models,"['Gangwoo Kim', 'Sungdong Kim', 'Byeongguk Jeon', 'Joonsuk Park', 'Jaewoo Kang']",http://arxiv.org/pdf/2310.14696v1.pdf,2023-10-23,," Questions in open-domain question answering are often ambiguous, allowingmultiple interpretations. 
One approach to handling them is to identify allpossible interpretations of the ambiguous question (AQ) and to generate along-form answer addressing them all, as suggested by Stelmakh et al., (2022).While it provides a comprehensive response without bothering the user forclarification, considering multiple dimensions of ambiguity and gatheringcorresponding knowledge remains a challenge. To cope with the challenge, wepropose a novel framework, Tree of Clarifications (ToC): It recursivelyconstructs a tree of disambiguations for the AQ -- via few-shot promptingleveraging external knowledge -- and uses it to generate a long-form answer.ToC outperforms existing baselines on ASQA in a few-shot setup across themetrics, while surpassing fully-supervised baselines trained on the wholetraining set in terms of Disambig-F1 and Disambig-ROUGE. Code is available athttps://github.com/gankim/tree-of-clarifications.",,arXiv,['cs.cl'],, dissecting incontext learning of translations in gpts,"['Vikas Raunak', 'Hany Hassan Awadalla', 'Arul Menezes']",http://arxiv.org/pdf/2310.15987v1.pdf,2023-10-24,," Most of the recent work in leveraging Large Language Models (LLMs) such asGPT-3 for Machine Translation (MT) has focused on selecting the few-shotsamples for prompting. In this work, we try to better understand the role ofdemonstration attributes for the in-context learning of translations throughperturbations of high-quality, in-domain demonstrations. We find thatasymmetric perturbation of the source-target mappings yield vastly differentresults. We show that the perturbation of the source side has surprisinglylittle impact, while target perturbation can drastically reduce translationquality, suggesting that it is the output text distribution that provides themost important learning signal during in-context learning of translations. Wepropose a method named Zero-Shot-Context to add this signal automatically inZero-Shot prompting. We demonstrate that it improves upon the zero-shottranslation performance of GPT-3, even making it competitive with few-shotprompted translations.",,arXiv,"['cs.cl', 'cs.ai']",, extraction of atypical aspects from customer reviews datasets and experiments with language models,"['Smita Nannaware', 'Erfan Al-Hossami', 'Razvan Bunescu']",http://arxiv.org/pdf/2311.02702v1.pdf,2023-11-05,," A restaurant dinner may become a memorable experience due to an unexpectedaspect enjoyed by the customer, such as an origami-making station in thewaiting area. If aspects that are atypical for a restaurant experience wereknown in advance, they could be leveraged to make recommendations that have thepotential to engender serendipitous experiences, further increasing usersatisfaction. Although relatively rare, whenever encountered, atypical aspectsoften end up being mentioned in reviews due to their memorable quality.Correspondingly, in this paper we introduce the task of detecting atypicalaspects in customer reviews. To facilitate the development of extractionmodels, we manually annotate benchmark datasets of reviews in three domains -restaurants, hotels, and hair salons, which we use to evaluate a number oflanguage models, ranging from fine-tuning the instruction-based text-to-texttransformer Flan-T5 to zero-shot and few-shot prompting of GPT-3.5.",,arXiv,"['cs.cl', 'cs.ai']",, sqlprompt incontext texttosql with minimal labeled data,"['Ruoxi Sun', 'Sercan Ö. 
Arik', 'Rajarishi Sinha', 'Hootan Nakhost', 'Hanjun Dai', 'Pengcheng Yin', 'Tomas Pfister']",http://arxiv.org/pdf/2311.02883v1.pdf,2023-11-06,," Text-to-SQL aims to automate the process of generating SQL queries on adatabase from natural language text. In this work, we propose ""SQLPrompt"",tailored to improve the few-shot prompting capabilities of Text-to-SQL forLarge Language Models (LLMs). Our methods include innovative prompt design,execution-based consistency decoding strategy which selects the SQL with themost consistent execution outcome among other SQL proposals, and a method thataims to improve performance by diversifying the SQL proposals duringconsistency selection with different prompt designs (""MixPrompt"") andfoundation models (""MixLLMs""). We show that \emph{SQLPrompt} outperformsprevious approaches for in-context learning with few labeled data by a largemargin, closing the gap with finetuning state-of-the-art with thousands oflabeled data.",,arXiv,['cs.cl'],, jurassic is (almost) all you need fewshot meaningtotext generation for opendomain dialogue,"['Lena Reed', 'Cecilia Li', 'Angela Ramirez', 'Liren Wu', 'Marilyn Walker']",http://arxiv.org/pdf/2110.08094v2.pdf,2021-10-15,," One challenge with open-domain dialogue systems is the need to producetruthful, high-quality responses on any topic. We aim to improve the qualityand coverage of Athena, an Alexa Prize dialogue system. We experiment withfew-shot prompt-based learning, comparing GPT-Neo to Jurassic-1, for themovies, music, TV, sports, and video game domains, both within andcross-domain, with different prompt set sizes (2, 3, 10), formats, and meaningrepresentations consisting of either sets of WikiData KG triples, or dialogueacts. Our evaluation uses BLEURT and human metrics, and shows that with 10-shotprompting, Athena-Jurassic's performance is significantly better for coherenceand semantic accuracy. Experiments with 2-shot cross-domain prompts results ina huge performance drop for Athena-GPT-Neo, whose semantic accuracy falls to0.41, and whose untrue hallucination rate increases to 12%. Experiments withdialogue acts for video games show that with 10-shot prompting, both modelslearn to control dialogue acts, but Athena-Jurassic has significantly highercoherence, and only 4% untrue hallucinations. Our results suggest thatAthena-Jurassic produces high enough quality outputs to be useful in livesystems with real users. To our knowledge, these are the first resultsdemonstrating that few-shot semantic prompt-based learning can create NLGs thatgeneralize to new domains, and produce high-quality, semantically-controlled,conversational responses directly from meaning representations.",,arXiv,['cs.cl'],, codelmsec benchmark systematically evaluating and finding security vulnerabilities in blackbox code language models,"['Hossein Hajipour', 'Keno Hassler', 'Thorsten Holz', 'Lea Schönherr', 'Mario Fritz']",http://arxiv.org/pdf/2302.04012v2.pdf,2023-02-08,," Large language models (LLMs) for automatic code generation have achievedbreakthroughs in several programming tasks. Their advances in competition-levelprogramming problems have made them an essential pillar of AI-assisted pairprogramming, and tools such as GitHub Copilot have emerged as part of the dailyprogramming workflow used by millions of developers. 
The training data for these models is usually collected from the Internet (e.g., from open-source repositories) and is likely to contain faults and security vulnerabilities. This unsanitized training data can cause the language models to learn these vulnerabilities and propagate them during the code generation procedure. While these models have been extensively assessed for their ability to produce functionally correct programs, there remains a lack of comprehensive investigations and benchmarks addressing the security aspects of these models. In this work, we propose a method to systematically study the security issues of code language models to assess their susceptibility to generating vulnerable code. To this end, we introduce the first approach to automatically find generated code that contains vulnerabilities in black-box code generation models. To achieve this, we present an approach to approximate inversion of the black-box code generation models based on few-shot prompting. We evaluate the effectiveness of our approach by examining code language models in generating high-risk security weaknesses. Furthermore, we establish a collection of diverse non-secure prompts for various vulnerability scenarios using our method. This dataset forms a benchmark for evaluating and comparing the security weaknesses in code language models.",,arXiv,"['cs.cr', 'cs.ai', 'cs.cl', 'cs.lg', 'cs.se']",, scifix outperforming gpt3 on scientific factual error correction,"['Dhananjay Ashok', 'Atharva Kulkarni', 'Hai Pham', 'Barnabás Póczos']",http://arxiv.org/pdf/2305.14707v2.pdf,2023-05-24,," Due to the prohibitively high cost of creating error correction datasets, most Factual Claim Correction methods rely on a powerful verification model to guide the correction process. This leads to a significant drop in performance in domains like scientific claims, where good verification models do not always exist. In this work, we introduce SciFix, a scientific claim correction system that does not require a verifier but can outperform existing methods by a considerable margin -- achieving correction accuracy of 84% on the SciFact dataset, 77% on SciFact-Open and 72% on the CovidFact dataset, compared to next best accuracies of 7%, 5%, and 15% on the same datasets respectively. Our method leverages the power of prompting with LLMs during training to create a richly annotated dataset that can be used for fully supervised training and regularization. We additionally use a claim-aware decoding procedure to improve the quality of corrected claims. Our method outperforms the very LLM that was used to generate the annotated dataset -- with Few-Shot Prompting on GPT3.5 achieving 58%, 61%, and 64% on the respective datasets, a consistently lower correction accuracy, despite using nearly 800 times as many parameters as our model.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, diffender diffusionbased adversarial defense against patch attacks,"['Caixin Kang', 'Yinpeng Dong', 'Zhengyi Wang', 'Shouwei Ruan', 'Yubo Chen', 'Hang Su', 'Xingxing Wei']",http://arxiv.org/pdf/2306.09124v3.pdf,2023-06-15,," Adversarial attacks, particularly patch attacks, pose significant threats to the robustness and reliability of deep learning models. Developing reliable defenses against patch attacks is crucial for real-world applications, yet current research in this area is unsatisfactory. In this paper, we propose DIFFender, a novel defense method that leverages a text-guided diffusion model to defend against adversarial patches.
DIFFender includes two main stages: patch localization and patch restoration. In the localization stage, we find and exploit an intriguing property of the diffusion model to precisely identify the locations of adversarial patches. In the restoration stage, we employ the diffusion model to reconstruct the adversarial regions in the images while preserving the integrity of the visual content. Thanks to the former finding, these two stages can be simultaneously guided by a unified diffusion model. Thus, we can utilize the close interaction between them to improve the whole defense performance. Moreover, we propose a few-shot prompt-tuning algorithm to fine-tune the diffusion model, enabling the pre-trained diffusion model to adapt to the defense task easily. We conduct extensive experiments on image classification, face recognition, and further in the physical world, demonstrating that our proposed method exhibits superior robustness under strong adaptive attacks and generalizes well across various scenarios, diverse classifiers, and multiple patch attack methods.",,arXiv,"['cs.cv', 'cs.ai', 'cs.cr', 'cs.lg']",, steering large language models for machine translation with finetuning and incontext learning,"['Duarte M. Alves', 'Nuno M. Guerreiro', 'João Alves', 'José Pombal', 'Ricardo Rei', 'José G. C. de Souza', 'Pierre Colombo', 'André F. T. Martins']",http://arxiv.org/pdf/2310.13448v1.pdf,2023-10-20,," Large language models (LLMs) are a promising avenue for machine translation (MT). However, current LLM-based MT systems are brittle: their effectiveness highly depends on the choice of few-shot examples and they often require extra post-processing due to overgeneration. Alternatives such as finetuning on translation instructions are computationally expensive and may weaken in-context learning capabilities, due to overspecialization. In this paper, we provide a closer look at this problem. We start by showing that adapter-based finetuning with LoRA matches the performance of traditional finetuning while reducing the number of training parameters by a factor of 50. This method also outperforms few-shot prompting and eliminates the need for post-processing or in-context examples. However, we show that finetuning generally degrades few-shot performance, hindering adaptation capabilities. Finally, to obtain the best of both worlds, we propose a simple approach that incorporates few-shot examples during finetuning. Experiments on 10 language pairs show that our proposed approach recovers the original few-shot capabilities while keeping the added benefits of finetuning.",,arXiv,['cs.cl'],, an early evaluation of gpt4v(ision),"['Yang Wu', 'Shilong Wang', 'Hao Yang', 'Tian Zheng', 'Hongbo Zhang', 'Yanyan Zhao', 'Bing Qin']",http://arxiv.org/pdf/2310.16534v1.pdf,2023-10-25,," In this paper, we evaluate different abilities of GPT-4V including visual understanding, language understanding, visual puzzle solving, and understanding of other modalities such as depth, thermal, video, and audio. To estimate GPT-4V's performance, we manually construct 656 test instances and carefully evaluate the results of GPT-4V.
The highlights of our findings are as follows: (1) GPT-4V exhibits impressive performance on English visual-centric benchmarks but fails to recognize simple Chinese texts in the images; (2) GPT-4V shows inconsistent refusal behavior when answering questions related to sensitive traits such as gender, race, and age; (3) GPT-4V obtains worse results than GPT-4 (API) on language understanding tasks including general language understanding benchmarks and visual commonsense knowledge evaluation benchmarks; (4) Few-shot prompting can improve GPT-4V's performance on both visual understanding and language understanding; (5) GPT-4V struggles to find the nuances between two similar images and solve the easy math picture puzzles; (6) GPT-4V shows non-trivial performance on the tasks of similar modalities to image, such as video and thermal. Our experimental results reveal the ability and limitations of GPT-4V and we hope our paper can provide some insights into the application and research of GPT-4V.",,arXiv,"['cs.cl', 'cs.cv']",, you are an expert linguistic annotator limits of llms as analyzers of abstract meaning representation,"['Allyson Ettinger', 'Jena D. Hwang', 'Valentina Pyatkin', 'Chandra Bhagavatula', 'Yejin Choi']",http://arxiv.org/pdf/2310.17793v2.pdf,2023-10-26,," Large language models (LLMs) show amazing proficiency and fluency in the use of language. Does this mean that they have also acquired insightful linguistic knowledge about the language, to an extent that they can serve as an ""expert linguistic annotator""? In this paper, we examine the successes and limitations of the GPT-3, ChatGPT, and GPT-4 models in analysis of sentence meaning structure, focusing on the Abstract Meaning Representation (AMR; Banarescu et al. 2013) parsing formalism, which provides rich graphical representations of sentence meaning structure while abstracting away from surface forms. We compare models' analysis of this semantic structure across two settings: 1) direct production of AMR parses based on zero- and few-shot prompts, and 2) indirect partial reconstruction of AMR via metalinguistic natural language queries (e.g., ""Identify the primary event of this sentence, and the predicate corresponding to that event.""). Across these settings, we find that models can reliably reproduce the basic format of AMR, and can often capture core event, argument, and modifier structure -- however, model outputs are prone to frequent and major errors, and holistic analysis of parse acceptability shows that even with few-shot demonstrations, models have virtually 0% success in producing fully accurate parses. Eliciting natural language responses produces similar patterns of errors. Overall, our findings indicate that these models out-of-the-box can capture aspects of semantic structure, but there remain key limitations in their ability to support fully accurate semantic analyses or parses.",,arXiv,"['cs.cl', 'cs.ai']",, styleaware radiology report generation with radgraph and fewshot prompting,"['Benjamin Yan', 'Ruochen Liu', 'David E. Kuo', 'Subathra Adithan', 'Eduardo Pontes Reis', 'Stephen Kwak', 'Vasantha Kumar Venugopal', ""Chloe P. O'Connell"", 'Agustina Saenz', 'Pranav Rajpurkar', 'Michael Moor']",http://arxiv.org/pdf/2310.17811v2.pdf,2023-10-26,," Automatically generated reports from medical images promise to improve the workflow of radiologists. Existing methods consider an image-to-report modeling task by directly generating a fully-fledged report from an image.
However, this conflates the content of the report (e.g., findings and their attributes) with its style (e.g., format and choice of words), which can lead to clinically inaccurate reports. To address this, we propose a two-step approach for radiology report generation. First, we extract the content from an image; then, we verbalize the extracted content into a report that matches the style of a specific radiologist. For this, we leverage RadGraph -- a graph representation of reports -- together with large language models (LLMs). In our quantitative evaluations, we find that our approach leads to beneficial performance. Our human evaluation with clinical raters highlights that the AI-generated reports are indistinguishably tailored to the style of individual radiologist despite leveraging only a few examples as context.",,arXiv,"['cs.ai', 'cs.cl']",, mentallama interpretable mental health analysis on social media with large language models,"['Kailai Yang', 'Tianlin Zhang', 'Ziyan Kuang', 'Qianqian Xie', 'Jimin Huang', 'Sophia Ananiadou']",http://arxiv.org/pdf/2309.13567v3.pdf,2023-09-24,," With the development of web technology, social media texts are becoming a rich source for automatic mental health analysis. As traditional discriminative methods bear the problem of low interpretability, the recent large language models have been explored for interpretable mental health analysis on social media, which aims to provide detailed explanations along with predictions. The results show that ChatGPT can generate approaching-human explanations for its correct classifications. However, LLMs still achieve unsatisfactory classification performance in a zero-shot/few-shot manner. Domain-specific finetuning is an effective solution, but faces 2 challenges: 1) lack of high-quality training data. 2) no open-source LLMs for interpretable mental health analysis were released to lower the finetuning cost. To alleviate these problems, we build the first multi-task and multi-source interpretable mental health instruction (IMHI) dataset on social media, with 105K data samples. The raw social media data are collected from 10 existing sources covering 8 mental health analysis tasks. We use expert-written few-shot prompts and collected labels to prompt ChatGPT and obtain explanations from its responses. To ensure the reliability of the explanations, we perform strict automatic and human evaluations on the correctness, consistency, and quality of generated data. Based on the IMHI dataset and LLaMA2 foundation models, we train MentalLLaMA, the first open-source LLM series for interpretable mental health analysis with instruction-following capability. We also evaluate the performance of MentalLLaMA on the IMHI evaluation benchmark with 10 test sets, where their correctness for making predictions and the quality of explanations are examined. The results show that MentalLLaMA approaches state-of-the-art discriminative methods in correctness and generates high-quality explanations.",,arXiv,['cs.cl'],, acecoder utilizing existing code to enhance code generation,"['Jia Li', 'Yunfei Zhao', 'Yongmin Li', 'Ge Li', 'Zhi Jin']",http://arxiv.org/pdf/2303.17780v3.pdf,2023-03-31,," Large Language Models (LLMs) have shown great success in code generation. LLMs take as the input a prompt and output the code. A key question is how to make prompts (i.e., Prompting Techniques). Existing prompting techniques are designed for natural language generation and have low accuracy in code generation. In this paper, we propose a new prompting technique named AceCoder.
Our motivation is that code generation meets two unique challenges (i.e., requirement understanding and code implementation). AceCoder contains two novel mechanisms (i.e., guided code generation and example retrieval) to solve these challenges. (1) Guided code generation asks LLMs first to analyze requirements and output an intermediate preliminary (e.g., test cases). The preliminary is used to clarify requirements and tell LLMs ""what to write"". (2) Example retrieval selects similar programs as examples in prompts, which provide lots of relevant content (e.g., algorithms, APIs) and teach LLMs ""how to write"". We apply AceCoder to three LLMs (e.g., Codex) and evaluate it on three public benchmarks using the Pass@k. Results show that AceCoder can significantly improve the performance of LLMs on code generation. (1) In terms of Pass@1, AceCoder outperforms the state-of-the-art baseline by up to 56.4% in MBPP, 70.7% in MBJP, and 88.4% in MBJSP. (2) AceCoder is effective in LLMs with different sizes (i.e., 6B to 13B) and different languages (i.e., Python, Java, and JavaScript). (3) Human evaluation shows human developers prefer programs from AceCoder.",,arXiv,"['cs.se', 'cs.ai']",, compositional semantic parsing with large language models,"['Andrew Drozdov', 'Nathanael Schärli', 'Ekin Akyürek', 'Nathan Scales', 'Xinying Song', 'Xinyun Chen', 'Olivier Bousquet', 'Denny Zhou']",http://arxiv.org/pdf/2209.15003v2.pdf,2022-09-29,," Humans can reason compositionally when presented with new tasks. Previous research shows that appropriate prompting techniques enable large language models (LLMs) to solve artificial compositional generalization tasks such as SCAN. In this work, we identify additional challenges in more realistic semantic parsing tasks with larger vocabulary and refine these prompting techniques to address them. Our best method is based on least-to-most prompting: it decomposes the problem using prompting-based syntactic parsing, then uses this decomposition to select appropriate exemplars and to sequentially generate the semantic parse. This method allows us to set a new state of the art for CFQ while requiring only 1% of the training data used by traditional approaches. Due to the general nature of our approach, we expect similar efforts will lead to new results in other tasks and domains, especially for knowledge-intensive applications.",,arXiv,"['cs.cl', 'cs.ai']",, gembamqm detecting translation quality error spans with gpt4,"['Tom Kocmi', 'Christian Federmann']",http://arxiv.org/pdf/2310.13988v1.pdf,2023-10-21,," This paper introduces GEMBA-MQM, a GPT-based evaluation metric designed to detect translation quality errors, specifically for the quality estimation setting without the need for human reference translations. Based on the power of large language models (LLM), GEMBA-MQM employs a fixed three-shot prompting technique, querying the GPT-4 model to mark error quality spans. Compared to previous works, our method has language-agnostic prompts, thus avoiding the need for manual prompt preparation for new languages. While preliminary results indicate that GEMBA-MQM achieves state-of-the-art accuracy for system ranking, we advise caution when using it in academic works to demonstrate improvements over other methods due to its dependence on the proprietary, black-box GPT model.",,arXiv,['cs.cl'],, eliciting topic hierarchies from large language models,"['Grace Li', 'Tao Long', 'Lydia B. Chilton']",http://arxiv.org/pdf/2310.19275v1.pdf,2023-10-30,," Finding topics to write about can be a mentally demanding process.
However, topic hierarchies can help writers explore topics of varying levels of specificity. In this paper, we use large language models (LLMs) to help construct topic hierarchies. Although LLMs have access to such knowledge, it can be difficult to elicit due to issues of specificity, scope, and repetition. We designed and tested three different prompting techniques to find one that maximized accuracy. We found that prepending the general topic area to a prompt yielded the most accurate results with 85% accuracy. We discuss applications of this research including STEM writing, education, and content creation.",,arXiv,['cs.hc'],, structured chainofthought prompting for code generation,"['Jia Li', 'Ge Li', 'Yongmin Li', 'Zhi Jin']",http://arxiv.org/pdf/2305.06599v3.pdf,2023-05-11,," Large Language Models (LLMs) (e.g., ChatGPT) have shown impressive performance in code generation. LLMs take prompts as inputs, and Chain-of-Thought (CoT) prompting is the state-of-the-art prompting technique. CoT prompting asks LLMs first to generate CoTs (i.e., intermediate natural language reasoning steps) and then output the code. However, CoT prompting is designed for natural language generation and has low accuracy in code generation. In this paper, we propose Structured CoTs (SCoTs) and present a novel prompting technique for code generation, named SCoT prompting. Our motivation is source code contains rich structural information and any code can be composed of three program structures (i.e., sequence, branch, and loop structures). Intuitively, structured intermediate reasoning steps make for structured source code. Thus, we ask LLMs to use program structures to build CoTs, obtaining SCoTs. Then, LLMs generate the final code based on SCoTs. Compared to CoT prompting, SCoT prompting explicitly constrains LLMs to think about how to solve requirements from the view of source code and further the performance of LLMs in code generation. We apply SCoT prompting to two LLMs (i.e., ChatGPT and Codex) and evaluate it on three benchmarks (i.e., HumanEval, MBPP, and MBCPP). (1) SCoT prompting outperforms the state-of-the-art baseline - CoT prompting by up to 13.79% in Pass@1. (2) Human evaluation shows human developers prefer programs from SCoT prompting. (3) SCoT prompting is robust to examples and achieves substantial improvements.",,arXiv,"['cs.se', 'cs.cl']",, languagespecific representation of emotionconcept knowledge causally supports emotion inference,"['Ming Li', 'Yusheng Su', 'Hsiu-Yuan Huang', 'Jiali Cheng', 'Xin Hu', 'Xinmiao Zhang', 'Huadong Wang', 'Yujia Qin', 'Xiaozhi Wang', 'Zhiyuan Liu', 'Dan Zhang']",http://arxiv.org/pdf/2302.09582v4.pdf,2023-02-19,," Understanding how language supports emotion inference remains a topic of debate in emotion science. The present study investigated whether language-derived emotion-concept knowledge would causally support emotion inference by manipulating the language-specific knowledge representations in large language models. Using the prompt technique, 14 attributes of emotion concepts were found to be represented by distinct artificial neuron populations. By manipulating these attribute-related neurons, the majority of the emotion inference tasks showed performance deterioration compared to random manipulations. The attribute-specific performance deterioration was related to the importance of different attributes in human mental space.
Our findings provide causal evidence in support of a language-based mechanism for emotion inference and highlight the contributions of emotion-concept knowledge.",,arXiv,"['cs.ai', 'cs.cl']",, posqa probe the world models of llms with size comparisons,"['Chang Shu', 'Jiuzhou Han', 'Fangyu Liu', 'Ehsan Shareghi', 'Nigel Collier']",http://arxiv.org/pdf/2310.13394v1.pdf,2023-10-20,," Embodied language comprehension emphasizes that language understanding is not solely a matter of mental processing in the brain but also involves interactions with the physical and social environment. With the explosive growth of Large Language Models (LLMs) and their already ubiquitous presence in our daily lives, it is becoming increasingly necessary to verify their real-world understanding. Inspired by cognitive theories, we propose POSQA: a Physical Object Size Question Answering dataset with simple size comparison questions to examine the extremity and analyze the potential mechanisms of the embodied comprehension of the latest LLMs. We show that even the largest LLMs today perform poorly under the zero-shot setting. We then push their limits with advanced prompting techniques and external knowledge augmentation. Furthermore, we investigate whether their real-world comprehension primarily derives from contextual information or internal weights and analyse the impact of prompt formats and report bias of different objects. Our results show that real-world understanding that LLMs shaped from textual data can be vulnerable to deception and confusion by the surface form of prompts, which makes it less aligned with human behaviours.",,arXiv,"['cs.cl', 'cs.ai', 'cs.cy']",, musr testing the limits of chainofthought with multistep soft reasoning,"['Zayne Sprague', 'Xi Ye', 'Kaj Bostrom', 'Swarat Chaudhuri', 'Greg Durrett']",http://arxiv.org/pdf/2310.16049v1.pdf,2023-10-24,," While large language models (LLMs) equipped with techniques like chain-of-thought prompting have demonstrated impressive capabilities, they still fall short in their ability to reason robustly in complex settings. However, evaluating LLM reasoning is challenging because system capabilities continue to grow while benchmark datasets for tasks like logical deduction have remained static. We introduce MuSR, a dataset for evaluating language models on multistep soft reasoning tasks specified in a natural language narrative. This dataset has two crucial features. First, it is created through a novel neurosymbolic synthetic-to-natural generation algorithm, enabling the construction of complex reasoning instances that challenge GPT-4 (e.g., murder mysteries roughly 1000 words in length) and which can be scaled further as more capable LLMs are released. Second, our dataset instances are free text narratives corresponding to real-world domains of reasoning; this makes it simultaneously much more challenging than other synthetically-crafted benchmarks while remaining realistic and tractable for human annotators to solve with high accuracy.
We evaluate a range of LLMs and prompting techniques on this dataset and characterize the gaps that remain for techniques like chain-of-thought to perform robust reasoning.",,arXiv,['cs.cl'],, little giants exploring the potential of small llms as evaluation metrics in summarization in the eval4nlp 2023 shared task,"['Neema Kotonya', 'Saran Krishnasamy', 'Joel Tetreault', 'Alejandro Jaimes']",http://arxiv.org/pdf/2311.00686v1.pdf,2023-11-01,," This paper describes and analyzes our participation in the 2023 Eval4NLP shared task, which focuses on assessing the effectiveness of prompt-based techniques to empower Large Language Models to handle the task of quality estimation, particularly in the context of evaluating machine translations and summaries. We conducted systematic experiments with various prompting techniques, including standard prompting, prompts informed by annotator instructions, and innovative chain-of-thought prompting. In addition, we integrated these approaches with zero-shot and one-shot learning methods to maximize the efficacy of our evaluation procedures. Our work reveals that combining these approaches using a ""small"", open source model (orca_mini_v3_7B) yields competitive results.",,arXiv,['cs.cl'],, can large language models design accurate label functions,"['Naiqing Guan', 'Kaiwen Chen', 'Nick Koudas']",http://arxiv.org/pdf/2311.00739v1.pdf,2023-11-01,," Programmatic weak supervision methodologies facilitate the expedited labeling of extensive datasets through the use of label functions (LFs) that encapsulate heuristic data sources. Nonetheless, the creation of precise LFs necessitates domain expertise and substantial endeavors. Recent advances in pre-trained language models (PLMs) have exhibited substantial potential across diverse tasks. However, the capacity of PLMs to autonomously formulate accurate LFs remains an underexplored domain. In this research, we address this gap by introducing DataSculpt, an interactive framework that harnesses PLMs for the automated generation of LFs. Within DataSculpt, we incorporate an array of prompting techniques, instance selection strategies, and LF filtration methods to explore the expansive design landscape. Ultimately, we conduct a thorough assessment of DataSculpt's performance on 12 real-world datasets, encompassing a range of tasks. This evaluation unveils both the strengths and limitations of contemporary PLMs in LF design.",,arXiv,"['cs.cl', 'cs.db', 'cs.lg', 'h.2.8; i.5.4']",, once boosting contentbased recommendation with both open and closedsource large language models,"['Qijiong Liu', 'Nuo Chen', 'Tetsuya Sakai', 'Xiao-Ming Wu']",http://arxiv.org/pdf/2305.06566v4.pdf,2023-05-11,," Personalized content-based recommender systems have become indispensable tools for users to navigate through the vast amount of content available on platforms like daily news websites and book recommendation services. However, existing recommenders face significant challenges in understanding the content of items. Large language models (LLMs), which possess deep semantic comprehension and extensive knowledge from pretraining, have proven to be effective in various natural language processing tasks. In this study, we explore the potential of leveraging both open- and closed-source LLMs to enhance content-based recommendation. With open-source LLMs, we utilize their deep layers as content encoders, enriching the representation of content at the embedding level. For closed-source LLMs, we employ prompting techniques to enrich the training data at the token level.
Through comprehensive experiments, we demonstrate the high effectiveness of both types of LLMs and show the synergistic relationship between them. Notably, we observed a significant relative improvement of up to 19.32% compared to existing state-of-the-art recommendation models. These findings highlight the immense potential of both open- and closed-source of LLMs in enhancing content-based recommendation systems. We will make our code and LLM-generated data available for other researchers to reproduce our results.",,arXiv,"['cs.ir', 'cs.cl']",, crosslingual prompting improving zeroshot chainofthought reasoning across languages,"['Libo Qin', 'Qiguang Chen', 'Fuxuan Wei', 'Shijue Huang', 'Wanxiang Che']",http://arxiv.org/pdf/2310.14799v1.pdf,2023-10-23,," Chain-of-thought (CoT) is capable of eliciting models to explicitly generate reasoning paths, thus promoting reasoning accuracy and attracting increasing attention. Specifically, zero-shot CoT achieves remarkable improvements in a wide range of reasoning tasks by simply instructing the LLM with the prompt ""Let's think step by step!"". Despite the success of zero-shot CoT, the existing zero-shot prompting techniques remain limited to a single language, making it challenging to generalize to other languages and hindering global development. In this work, we introduce cross-lingual prompting (CLP), aiming to improve zero-shot CoT reasoning across languages. Specifically, CLP consists of two main components: (1) cross-lingual alignment prompting and (2) task-specific solver prompting. The cross-lingual alignment prompting is responsible for aligning representations across different languages, whereas the task-specific solver prompting is used to generate the final chain of thoughts and results for the reasoning task. In addition, we further introduce cross-lingual self-consistent prompting (CLSP) to ensemble different reasoning paths across languages. Our experimental evaluations on several benchmarks demonstrate that CLP and CLSP significantly outperform the existing prompting methods and achieve state-of-the-art performance. We hope this work will inspire further breakthroughs in cross-lingual CoT.",,arXiv,"['cs.cl', 'cs.ai']",, hetgpt harnessing the power of prompt tuning in pretrained heterogeneous graph neural networks,"['Yihong Ma', 'Ning Yan', 'Jiayu Li', 'Masood Mortazavi', 'Nitesh V. Chawla']",http://arxiv.org/pdf/2310.15318v3.pdf,2023-10-23,," Graphs have emerged as a natural choice to represent and analyze the intricate patterns and rich information of the Web, enabling applications such as online page classification and social recommendation. The prevailing ""pre-train, fine-tune"" paradigm has been widely adopted in graph machine learning tasks, particularly in scenarios with limited labeled nodes. However, this approach often exhibits a misalignment between the training objectives of pretext tasks and those of downstream tasks. This gap can result in the ""negative transfer"" problem, wherein the knowledge gained from pre-training adversely affects performance in the downstream tasks. The surge in prompt-based learning within Natural Language Processing (NLP) suggests the potential of adapting a ""pre-train, prompt"" paradigm to graphs as an alternative. However, existing graph prompting techniques are tailored to homogeneous graphs, neglecting the inherent heterogeneity of Web graphs. To bridge this gap, we propose HetGPT, a general post-training prompting framework to improve the predictive performance of pre-trained heterogeneous graph neural networks (HGNNs).
The key is the design of a novel prompting function that integrates a virtual class prompt and a heterogeneous feature prompt, with the aim to reformulate downstream tasks to mirror pretext tasks. Moreover, HetGPT introduces a multi-view neighborhood aggregation mechanism, capturing the complex neighborhood structure in heterogeneous graphs. Extensive experiments on three benchmark datasets demonstrate HetGPT's capability to enhance the performance of state-of-the-art HGNNs on semi-supervised node classification.",,arXiv,"['cs.lg', 'cs.ai']",, llm4dyg can large language models solve problems on dynamic graphs,"['Zeyang Zhang', 'Xin Wang', 'Ziwei Zhang', 'Haoyang Li', 'Yijian Qin', 'Simin Wu', 'Wenwu Zhu']",http://arxiv.org/pdf/2310.17110v1.pdf,2023-10-26,," In an era marked by the increasing adoption of Large Language Models (LLMs) for various tasks, there is a growing focus on exploring LLMs' capabilities in handling web data, particularly graph data. Dynamic graphs, which capture temporal network evolution patterns, are ubiquitous in real-world web data. Evaluating LLMs' competence in understanding spatial-temporal information on dynamic graphs is essential for their adoption in web applications, which remains unexplored in the literature. In this paper, we bridge the gap via proposing to evaluate LLMs' spatial-temporal understanding abilities on dynamic graphs, to the best of our knowledge, for the first time. Specifically, we propose the LLM4DyG benchmark, which includes nine specially designed tasks considering the capability evaluation of LLMs from both temporal and spatial dimensions. Then, we conduct extensive experiments to analyze the impacts of different data generators, data statistics, prompting techniques, and LLMs on the model performance. Finally, we propose Disentangled Spatial-Temporal Thoughts (DST2) for LLMs on dynamic graphs to enhance LLMs' spatial-temporal understanding abilities. Our main observations are: 1) LLMs have preliminary spatial-temporal understanding abilities on dynamic graphs, 2) Dynamic graph tasks show increasing difficulties for LLMs as the graph size and density increase, while not sensitive to the time span and data generation mechanism, 3) the proposed DST2 prompting method can help to improve LLMs' spatial-temporal understanding abilities on dynamic graphs for most tasks. The data and codes will be open-sourced at publication time.",,arXiv,['cs.lg'],, which is better exploring prompting strategy for llmbased metrics,"['Joonghoon Kim', 'Saeran Park', 'Kiyoon Jeong', 'Sangmin Lee', 'Seung Hun Han', 'Jiyoon Lee', 'Pilsung Kang']",http://arxiv.org/pdf/2311.03754v1.pdf,2023-11-07,," This paper describes the DSBA submissions to the Prompting Large Language Models as Explainable Metrics shared task, where systems were submitted to two tracks: small and large summarization tracks. With advanced Large Language Models (LLMs) such as GPT-4, evaluating the quality of Natural Language Generation (NLG) has become increasingly paramount. Traditional similarity-based metrics such as BLEU and ROUGE have shown to misalign with human evaluation and are ill-suited for open-ended generation tasks. To address this issue, we explore the potential capability of LLM-based metrics, especially leveraging open-source LLMs. In this study, wide range of prompts and prompting techniques are systematically analyzed with three approaches: prompting strategy, score aggregation, and explainability.
Our research focuses on formulating effective prompt templates, determining the granularity of NLG quality scores and assessing the impact of in-context examples on LLM-based evaluation. Furthermore, three aggregation strategies are compared to identify the most reliable method for aggregating NLG quality scores. To examine explainability, we devise a strategy that generates rationales for the scores and analyzes the characteristics of the explanation produced by the open-source LLMs. Extensive experiments provide insights regarding evaluation capabilities of open-source LLMs and suggest effective prompting strategies.",,arXiv,['cs.cl'],, autonomous treesearch ability of large language models,"['Zheyu Zhang', 'Zhuorui Ye', 'Yikang Shen', 'Chuang Gan']",http://arxiv.org/pdf/2310.10686v1.pdf,2023-10-14,," Large Language Models have excelled in remarkable reasoning capabilities with advanced prompting techniques, but they fall short on tasks that require exploration, strategic foresight, and sequential decision-making. Recent works propose to utilize external programs to define search logic, such that LLMs can perform passive tree search to solve more challenging reasoning tasks. Though impressive results have been achieved, there are several fundamental limitations of these approaches. First, passive tree searches are not efficient as they usually require multiple rounds of LLM API calls to solve one single problem. Moreover, passive search methods are not flexible since they need task-specific program designs. Then a natural question arises: can we maintain the tree-search capability of LLMs without the aid of external programs, and can still generate responses that clearly demonstrate the process of a tree-structure search? To this end, we propose a new concept called autonomous tree-search ability of LLM, which can automatically generate a response containing search trajectories for the correct answer. Concretely, we perform search trajectories using capable LLM API via a fixed system prompt, allowing them to perform autonomous tree-search (ATS) right out of the box. Experiments on 4 puzzle games demonstrate our method can achieve huge improvements. The ATS-BFS method outperforms the Chain of Thought approach by achieving an average accuracy improvement of 33%. Compared to Tree of Thoughts, it requires 65.6% or 47.7% less GPT-api cost to attain a comparable level of accuracy. Moreover, we have collected data using the ATS prompt method and fine-tuned LLaMA. This approach yield a greater improvement compared to the ones fine-tuned on CoT data. Specifically, it outperforms CoT-tuned LLaMAs by an average of 40.6% and 38.5% for LLaMA2-7B and LLaMA2-13B, respectively.",,arXiv,"['cs.cl', 'cs.ai']",, s$^3$hqa a threestage approach for multihop texttable hybrid question answering,"['Fangyu Lei', 'Xiang Li', 'Yifan Wei', 'Shizhu He', 'Yiming Huang', 'Jun Zhao', 'Kang Liu']",http://arxiv.org/pdf/2305.11725v1.pdf,2023-05-19,," Answering multi-hop questions over hybrid factual knowledge from the given text and table (TextTableQA) is a challenging task. Existing models mainly adopt a retriever-reader framework, which have several deficiencies, such as noisy labeling in training retriever, insufficient utilization of heterogeneous information over text and table, and deficient ability for different reasoning operations. In this paper, we propose a three-stage TextTableQA framework S3HQA, which comprises of retriever, selector, and reasoner. We use a retriever with refinement training to solve the noisy labeling problem.
Then, a hybrid selector considers the linked relationships between heterogeneous data to select the most relevant factual knowledge. For the final stage, instead of adapting a reading comprehension module like in previous methods, we employ a generation-based reasoner to obtain answers. This includes two approaches: a row-wise generator and an LLM prompting generator~(first time used in this task). The experimental results demonstrate that our method achieves competitive results in the few-shot setting. When trained on the full dataset, our approach outperforms all baseline methods, ranking first on the HybridQA leaderboard.",,arXiv,['cs.cl'],, a mlllm pairing for better code comment classification,['Hanna Abi Akl'],http://arxiv.org/pdf/2310.10275v1.pdf,2023-10-13,," The ""Information Retrieval in Software Engineering (IRSE)"" at FIRE 2023 shared task introduces code comment classification, a challenging task that pairs a code snippet with a comment that should be evaluated as either useful or not useful to the understanding of the relevant code. We answer the code comment classification shared task challenge by providing a two-fold evaluation: from an algorithmic perspective, we compare the performance of classical machine learning systems and complement our evaluations from a data-driven perspective by generating additional data with the help of large language model (LLM) prompting to measure the potential increase in performance. Our best model, which took second place in the shared task, is a Neural Network with a Macro-F1 score of 88.401% on the provided seed data and a 1.5% overall increase in performance on the data generated by the LLM.",,arXiv,"['cs.se', 'cs.ai']",, multistage large language model correction for speech recognition,"['Jie Pu', 'Thai-Son Nguyen', 'Sebastian Stüker']",http://arxiv.org/pdf/2310.11532v1.pdf,2023-10-17,," In this paper, we investigate the usage of large language models (LLMs) to improve the performance of competitive speech recognition systems. Different from traditional language models that focus on one single data domain, the rise of LLMs brings us the opportunity to push the limit of state-of-the-art ASR performance, and at the same time to achieve higher robustness and generalize effectively across multiple domains. Motivated by this, we propose a novel multi-stage approach to combine traditional language model re-scoring and LLM prompting. Specifically, the proposed method has two stages: the first stage uses a language model to re-score an N-best list of ASR hypotheses and run a confidence check; The second stage uses prompts to a LLM to perform ASR error correction on less confident results from the first stage. Our experimental results demonstrate the effectiveness of the proposed method by showing a 10% ~ 20% relative improvement in WER over a competitive ASR system -- across multiple test domains.",,arXiv,"['cs.cl', 'eess.as']",, omnifill domainagnostic form filling suggestions using multifaceted context,"['Timothy J. Aveni', 'Armando Fox', 'Björn Hartmann']",http://arxiv.org/pdf/2310.17826v1.pdf,2023-10-27,," Predictive suggestion systems offer contextually-relevant text entry completions. Existing approaches, like autofill, often excel in narrowly-defined domains but fail to generalize to arbitrary workflows.
We introduce a conceptual framework to analyze the compound demands of a particular suggestion context, yielding unique opportunities for large language models (LLMs) to infer suggestions for a wide range of domain-agnostic form-filling tasks that were out of reach with prior approaches. We explore these opportunities in OmniFill, a prototype that collects multi-faceted context including browsing and text entry activity to construct an LLM prompt that offers suggestions in situ for arbitrary structured text entry interfaces. Through a user study with 18 participants, we found that OmniFill offered valuable suggestions and we identified four themes that characterize users' behavior and attitudes: an ""opportunistic scrapbooking"" approach; a trust placed in the system; value in partial success; and a need for visibility into prompt context.",,arXiv,['cs.hc'],, knowledgeinfused prompting assessing and advancing clinical text data generation with large language models,"['Ran Xu', 'Hejie Cui', 'Yue Yu', 'Xuan Kan', 'Wenqi Shi', 'Yuchen Zhuang', 'Wei Jin', 'Joyce Ho', 'Carl Yang']",http://arxiv.org/pdf/2311.00287v1.pdf,2023-11-01,," Clinical natural language processing requires methods that can address domain-specific challenges, such as complex medical terminology and clinical contexts. Recently, large language models (LLMs) have shown promise in this domain. Yet, their direct deployment can lead to privacy issues and are constrained by resources. To address this challenge, we delve into synthetic clinical text generation using LLMs for clinical NLP tasks. We propose an innovative, resource-efficient approach, ClinGen, which infuses knowledge into the process. Our model involves clinical knowledge extraction and context-informed LLM prompting. Both clinical topics and writing styles are drawn from external domain-specific knowledge graphs and LLMs to guide data generation. Our extensive empirical study across 7 clinical NLP tasks and 16 datasets reveals that ClinGen consistently enhances performance across various tasks, effectively aligning the distribution of real datasets and significantly enriching the diversity of generated training instances. We will publish our code and all the generated data in \url{https://github.com/ritaranx/ClinGen}.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg', 'q-bio.qm']",, fewshot reranking for multihop qa via language model prompting,"['Muhammad Khalifa', 'Lajanugen Logeswaran', 'Moontae Lee', 'Honglak Lee', 'Lu Wang']",http://arxiv.org/pdf/2205.12650v3.pdf,2022-05-25,," We study few-shot reranking for multi-hop QA with open-domain questions. To alleviate the need for a large number of labeled question-document pairs for retriever training, we propose PromptRank, which relies on large language models prompting for multi-hop path reranking. PromptRank first constructs an instruction-based prompt that includes a candidate document path and then computes the relevance score between a given question and the path based on the conditional likelihood of the question given the path prompt according to a language model. PromptRank yields strong retrieval performance on HotpotQA with only 128 training examples compared to state-of-the-art methods trained on thousands of examples -- 73.6 recall@10 by PromptRank vs. 77.8 by PathRetriever and 77.5 by multi-hop dense retrieval. Code available at https://github.com/mukhal/PromptRank",,arXiv,"['cs.cl', 'cs.ir']",, metaincontext learning in large language models,"['Julian Coda-Forno', 'Marcel Binz', 'Zeynep Akata', 'Matthew Botvinick', 'Jane X.
Wang', 'Eric Schulz']",http://arxiv.org/pdf/2305.12907v1.pdf,2023-05-22,," Large language models have shown tremendous performance in a variety of tasks. In-context learning -- the ability to improve at a task after being provided with a number of demonstrations -- is seen as one of the main contributors to their success. In the present paper, we demonstrate that the in-context learning abilities of large language models can be recursively improved via in-context learning itself. We coin this phenomenon meta-in-context learning. Looking at two idealized domains, a one-dimensional regression task and a two-armed bandit task, we show that meta-in-context learning adaptively reshapes a large language model's priors over expected tasks. Furthermore, we find that meta-in-context learning modifies the in-context learning strategies of such models. Finally, we extend our approach to a benchmark of real-world regression problems where we observe competitive performance to traditional learning algorithms. Taken together, our work improves our understanding of in-context learning and paves the way toward adapting large language models to the environment they are applied purely through meta-in-context learning rather than traditional finetuning.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, metavl transferring incontext learning ability from language models to visionlanguage models,"['Masoud Monajatipoor', 'Liunian Harold Li', 'Mozhdeh Rouhsedaghat', 'Lin F. Yang', 'Kai-Wei Chang']",http://arxiv.org/pdf/2306.01311v1.pdf,2023-06-02,," Large-scale language models have shown the ability to adapt to a new task via conditioning on a few demonstrations (i.e., in-context learning). However, in the vision-language domain, most large-scale pre-trained vision-language (VL) models do not possess the ability to conduct in-context learning. How can we enable in-context learning for VL models? In this paper, we study an interesting hypothesis: can we transfer the in-context learning ability from the language domain to VL domain? Specifically, we first meta-trains a language model to perform in-context learning on NLP tasks (as in MetaICL); then we transfer this model to perform VL tasks by attaching a visual encoder. Our experiments suggest that indeed in-context learning ability can be transferred cross modalities: our model considerably improves the in-context learning capability on VL tasks and can even compensate for the size of the model significantly. On VQA, OK-VQA, and GQA, our method could outperform the baseline model while having 20 times fewer parameters.",,arXiv,['cs.cl'],, an explanation of incontext learning as implicit bayesian inference,"['Sang Michael Xie', 'Aditi Raghunathan', 'Percy Liang', 'Tengyu Ma']",http://arxiv.org/pdf/2111.02080v6.pdf,2021-11-03,," Large language models (LMs) such as GPT-3 have the surprising ability to do in-context learning, where the model learns to do a downstream task simply by conditioning on a prompt consisting of input-output examples. The LM learns from these examples without being explicitly pretrained to learn. Thus, it is unclear what enables in-context learning. In this paper, we study how in-context learning can emerge when pretraining documents have long-range coherence. Here, the LM must infer a latent document-level concept to generate coherent next tokens during pretraining. At test time, in-context learning occurs when the LM also infers a shared latent concept between examples in a prompt.
We prove when this occurs despite a distribution mismatch between prompts and pretraining data in a setting where the pretraining distribution is a mixture of HMMs. In contrast to messy large-scale datasets used to train LMs capable of in-context learning, we generate a small-scale synthetic dataset (GINC) where Transformers and LSTMs both exhibit in-context learning. Beyond the theory, experiments on GINC exhibit large-scale real-world phenomena including improved in-context performance with model scaling (despite the same pretraining loss), sensitivity to example order, and instances where zero-shot is better than few-shot in-context learning.",,arXiv,"['cs.cl', 'cs.lg']",, rethinking the role of scale for incontext learning an interpretabilitybased case study at 66 billion scale,"['Hritik Bansal', 'Karthik Gopalakrishnan', 'Saket Dingliwal', 'Sravan Bodapati', 'Katrin Kirchhoff', 'Dan Roth']",http://arxiv.org/pdf/2212.09095v2.pdf,2022-12-18,," Language models have been shown to perform better with an increase in scale on a wide variety of tasks via the in-context learning paradigm. In this paper, we investigate the hypothesis that the ability of a large language model to in-context learn-perform a task is not uniformly spread across all of its underlying components. Using a 66 billion parameter language model (OPT-66B) across a diverse set of 14 downstream tasks, we find this is indeed the case: $\sim$70% of attention heads and $\sim$20% of feed forward networks can be removed with minimal decline in task performance. We find substantial overlap in the set of attention heads (un)important for in-context learning across tasks and number of in-context examples. We also address our hypothesis through a task-agnostic lens, finding that a small set of attention heads in OPT-66B score highly on their ability to perform primitive induction operations associated with in-context learning, namely, prefix matching and copying. These induction heads overlap with task-specific important heads, reinforcing arguments by Olsson et al. (arXiv:2209.11895) regarding induction head generality to more sophisticated behaviors associated with in-context learning. Overall, our study provides several insights that indicate large language models may be under-trained for in-context learning and opens up questions on how to pre-train language models to more effectively perform in-context learning.",,arXiv,"['cs.cl', 'cs.ai']",, a closer look at incontext learning under distribution shifts,"['Kartik Ahuja', 'David Lopez-Paz']",http://arxiv.org/pdf/2305.16704v1.pdf,2023-05-26,," In-context learning, a capability that enables a model to learn from input examples on the fly without necessitating weight updates, is a defining characteristic of large language models. In this work, we follow the setting proposed in (Garg et al., 2022) to better understand the generality and limitations of in-context learning from the lens of the simple yet fundamental task of linear regression. The key question we aim to address is: Are transformers more adept than some natural and simpler architectures at performing in-context learning under varying distribution shifts? To compare transformers, we propose to use a simple architecture based on set-based Multi-Layer Perceptrons (MLPs). We find that both transformers and set-based MLPs exhibit in-context learning under in-distribution evaluations, but transformers more closely emulate the performance of ordinary least squares (OLS).
Transformers also display better resilience to mild distribution shifts, where set-based MLPs falter. However, under severe distribution shifts, both models' in-context learning abilities diminish.",,arXiv,"['cs.lg', 'stat.ml']",, exploring the relationship between model architecture and incontext learning ability,"['Ivan Lee', 'Nan Jiang', 'Taylor Berg-Kirkpatrick']",http://arxiv.org/pdf/2310.08049v2.pdf,2023-10-12,," What is the relationship between model architecture and the ability to perform in-context learning? In this empirical study, we take the first steps toward answering this question. We evaluate twelve model architectures capable of causal language modeling across a suite of synthetic in-context learning tasks. These selected architectures represent a broad range of paradigms, including recurrent and convolution-based neural networks, transformers, state-space model inspired, and other emerging attention alternatives. We discover that all the considered architectures can perform in-context learning under a wider range of conditions than previously documented. Additionally, we observe stark differences in statistical efficiency and consistency by varying context length and task difficulty. We also measure each architecture's predisposition towards in-context learning when presented with alternative routes for task resolution. Finally, and somewhat surprisingly, we find that several attention alternatives are more robust in-context learners than transformers. Given that such approaches have constant-sized memory footprints at inference time, this result opens the possibility of scaling up in-context learning to accommodate vastly larger numbers of in-context examples.",,arXiv,['cs.lg'],, what can transformers learn incontext a case study of simple function classes,"['Shivam Garg', 'Dimitris Tsipras', 'Percy Liang', 'Gregory Valiant']",http://arxiv.org/pdf/2208.01066v3.pdf,2022-08-01,," In-context learning refers to the ability of a model to condition on a prompt sequence consisting of in-context examples (input-output pairs corresponding to some task) along with a new query input, and generate the corresponding output. Crucially, in-context learning happens only at inference time without any parameter updates to the model. While large language models such as GPT-3 exhibit some ability to perform in-context learning, it is unclear what the relationship is between tasks on which this succeeds and what is present in the training data. To make progress towards understanding in-context learning, we consider the well-defined problem of training a model to in-context learn a function class (e.g., linear functions): that is, given data derived from some functions in the class, can we train a model to in-context learn ""most"" functions from this class? We show empirically that standard Transformers can be trained from scratch to perform in-context learning of linear functions -- that is, the trained model is able to learn unseen linear functions from in-context examples with performance comparable to the optimal least squares estimator. In fact, in-context learning is possible even under two forms of distribution shift: (i) between the training data of the model and inference-time prompts, and (ii) between the in-context examples and the query input during inference. We also show that we can train Transformers to in-context learn more complex function classes -- namely sparse linear functions, two-layer neural networks, and decision trees -- with performance that matches or exceeds task-specific learning algorithms.
Our code and models are available at https://github.com/dtsip/in-context-learning .",,arXiv,"['cs.cl', 'cs.lg']",, "structured prompting scaling incontext learning to 1,000 examples","['Yaru Hao', 'Yutao Sun', 'Li Dong', 'Zhixiong Han', 'Yuxian Gu', 'Furu Wei']",http://arxiv.org/pdf/2212.06713v1.pdf,2022-12-13,," Large language models have exhibited intriguing in-context learning capability, achieving promising zero- and few-shot performance without updating the parameters. However, conventional in-context learning is usually restricted by length constraints, rendering it ineffective to absorb supervision from a large number of examples. In order to go beyond few shots, we introduce structured prompting that breaks the length limit and scales in-context learning to thousands of examples. Specifically, demonstration examples are separately encoded with well-designed position embeddings, and then they are jointly attended by the test example using a rescaled attention mechanism. So we can scale the number of exemplars with linear complexity instead of quadratic complexity with respect to length. Experimental results on a diverse set of tasks show that our approach improves end-task performance and reduces evaluation variance over conventional in-context learning as the number of demonstration examples increases. Code has been released at https://aka.ms/structured-prompting.",,arXiv,['cs.cl'],, pretraining to learn in context,"['Yuxian Gu', 'Li Dong', 'Furu Wei', 'Minlie Huang']",http://arxiv.org/pdf/2305.09137v1.pdf,2023-05-16,," In-context learning, where pre-trained language models learn to perform tasks from task examples and instructions in their contexts, has attracted much attention in the NLP community. However, the ability of in-context learning is not fully exploited because language models are not explicitly trained to learn in context. To this end, we propose PICL (Pre-training for In-Context Learning), a framework to enhance the language models' in-context learning ability by pre-training the model on a large collection of ""intrinsic tasks"" in the general plain-text corpus using the simple language modeling objective. PICL encourages the model to infer and perform tasks by conditioning on the contexts while maintaining task generalization of pre-trained models. We evaluate the in-context learning performance of the model trained with PICL on seven widely-used text classification datasets and the Super-NaturalInstructions benchmark, which contains 100+ NLP tasks formulated to text generation. Our experiments show that PICL is more effective and task-generalizable than a range of baselines, outperforming larger language models with nearly 4x parameters. The code is publicly available at https://github.com/thu-coai/PICL.",,arXiv,['cs.cl'],, exnet efficient incontext learning for dataless text classification,"['Debaditya Shome', 'Kuldeep Yadav']",http://arxiv.org/pdf/2305.14622v1.pdf,2023-05-24,," Large pre-trained language models (PLMs) have made significant progress in encoding world knowledge and spawned a new set of learning paradigms including zero-shot, few-shot, and in-context learning. Many language tasks can be modeled as a set of prompts (for example, is this text about geography?) and language models can provide binary answers, i.e., Yes or No. There is evidence to suggest that the next-word prediction used by many PLMs does not align well with zero-shot paradigms. Therefore, PLMs are fine-tuned as a question-answering system.
In-context learning extends zero-shot learning by incorporating prompts and examples, resulting in increased task accuracy. Our paper presents EXnet, a model specifically designed to perform in-context learning without any limitations on the number of examples. We argue that in-context learning is an effective method to increase task accuracy, and providing examples facilitates cross-task generalization, especially when it comes to text classification tasks. With extensive experiments, we show that even our smallest model (15M parameters) generalizes to several unseen classification tasks and domains.",,arXiv,"['cs.cl', 'cs.lg']",, raven incontext learning with retrieval augmented encoderdecoder language models,"['Jie Huang', 'Wei Ping', 'Peng Xu', 'Mohammad Shoeybi', 'Kevin Chen-Chuan Chang', 'Bryan Catanzaro']",http://arxiv.org/pdf/2308.07922v1.pdf,2023-08-15,," In this paper, we investigate the in-context learning ability of retrieval-augmented encoder-decoder language models. We first conduct a comprehensive analysis of the state-of-the-art ATLAS model and identify its limitations in in-context learning, primarily due to a mismatch between pretraining and testing, as well as a restricted context length. To address these issues, we propose RAVEN, a model that combines retrieval-augmented masked language modeling and prefix language modeling. We further introduce Fusion-in-Context Learning to enhance the few-shot performance by enabling the model to leverage more in-context examples without requiring additional training or model modifications. Through extensive experiments, we demonstrate that RAVEN significantly outperforms ATLAS and achieves results comparable to the most advanced language models in certain scenarios, despite having substantially fewer parameters. Our work underscores the potential of retrieval-augmented encoder-decoder language models for in-context learning and encourages further research in this direction.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, incontext learning dynamics with random binary sequences,"['Eric J. Bigelow', 'Ekdeep Singh Lubana', 'Robert P. Dick', 'Hidenori Tanaka', 'Tomer D. Ullman']",http://arxiv.org/pdf/2310.17639v2.pdf,2023-10-26,," Large language models (LLMs) trained on huge corpora of text datasets demonstrate intriguing capabilities, achieving state-of-the-art performance on tasks they were not explicitly trained for. The precise nature of LLM capabilities is often mysterious, and different prompts can elicit different capabilities through in-context learning. We propose a framework that enables us to analyze in-context learning dynamics to understand latent concepts underlying LLMs' behavioral patterns. This provides a more nuanced understanding than success-or-failure evaluation benchmarks, but does not require observing internal activations as a mechanistic interpretation of circuits would. Inspired by the cognitive science of human randomness perception, we use random binary sequences as context and study dynamics of in-context learning by manipulating properties of context data, such as sequence length.
In the latest GPT-3.5+ models, we find emergent abilities togenerate seemingly random numbers and learn basic formal languages, withstriking in-context learning dynamics where model outputs transition sharplyfrom seemingly random behaviors to deterministic repetition.",,arXiv,"['cs.ai', 'cs.cl', 'cs.lg']",, incontext learning with many demonstration examples,"['Mukai Li', 'Shansan Gong', 'Jiangtao Feng', 'Yiheng Xu', 'Jun Zhang', 'Zhiyong Wu', 'Lingpeng Kong']",http://arxiv.org/pdf/2302.04931v1.pdf,2023-02-09,," Large pre-training language models (PLMs) have shown promising in-contextlearning abilities. However, due to the backbone transformer architecture,existing PLMs are bottlenecked by the memory and computational cost whenscaling up to a large context size, leaving instruction tuning and in-contextlearning of many demonstration examples, as well as long-range languagemodeling under-explored. In this study, we propose a long-range language modelEVALM based on an efficient transformer mechanism. EVALM is trained with 8ktokens per batch line and can test up to 256k-lengthed contexts withextrapolation, 128 times to the limit of existing PLMs (e.g. GPT3). Based onEVALM, we scale up the size of examples efficiently in both instruction tuningand in-context learning to explore the boundary of the benefits from moreannotated data. Experimental results on a diverse set of tasks show that EVALMachieves 4.1% higher accuracy on average, and the average length of achievingthe best accuracy score over tasks is around 12k. We find that in-contextlearning can achieve higher performance with more demonstrations undermany-shot instruction tuning (8k), and further extending the length ofinstructions (16k) can further improve the upper bound of scaling in-contextlearning.",,arXiv,"['cs.cl', 'cs.ai']",, the learnability of incontext learning,"['Noam Wies', 'Yoav Levine', 'Amnon Shashua']",http://arxiv.org/pdf/2303.07895v1.pdf,2023-03-14,," In-context learning is a surprising and important phenomenon that emergedwhen modern language models were scaled to billions of learned parameters.Without modifying a large language model's weights, it can be tuned to performvarious downstream natural language tasks simply by including concatenatedtraining examples of these tasks in its input. Though disruptive for manypractical applications of large language models, this emergent learningparadigm is not well understood from a theoretical perspective. In this paper,we propose a first-of-its-kind PAC based framework for in-context learnability,and use it to provide the first finite sample complexity results for thein-context learning setup. Our framework includes an initial pretraining phase,which fits a function to the pretraining distribution, and then a secondin-context learning phase, which keeps this function constant and concatenatestraining examples of the downstream task in its input. We use our framework inorder to prove that, under mild assumptions, when the pretraining distributionis a mixture of latent tasks (a model often considered for natural languagepretraining), these tasks can be efficiently learned via in-context learning,even though the model's weights are unchanged and the input significantlydiverges from the pretraining distribution. Our theoretical analysis revealsthat in this setting, in-context learning is more about identifying the taskthan about learning it, a result which is in line with a series of recentempirical findings. 
We hope that the in-context learnability frameworkpresented in this paper will facilitate future progress towards a deeperunderstanding of this important new learning paradigm.",,arXiv,['cs.cl'],, sinc selfsupervised incontext learning for visionlanguage tasks,"['Yi-Syuan Chen', 'Yun-Zhu Song', 'Cheng Yu Yeo', 'Bei Liu', 'Jianlong Fu', 'Hong-Han Shuai']",http://arxiv.org/pdf/2307.07742v2.pdf,2023-07-15,," Large Pre-trained Transformers exhibit an intriguing capacity for in-contextlearning. Without gradient updates, these models can rapidly construct newpredictors from demonstrations presented in the inputs. Recent works promotethis ability in the vision-language domain by incorporating visual informationinto large language models that can already make in-context predictions.However, these methods could inherit issues in the language domain, such astemplate sensitivity and hallucination. Also, the scale of these languagemodels raises a significant demand for computations, making learning andoperating these models resource-intensive. To this end, we raise a question:``How can we enable in-context learning without relying on the intrinsicin-context ability of large language models?"". To answer it, we propose asuccinct and general framework, Self-supervised IN-Context learning (SINC),that introduces a meta-model to learn on self-supervised prompts consisting oftailored demonstrations. The learned models can be transferred to downstreamtasks for making in-context predictions on-the-fly. Extensive experiments showthat SINC outperforms gradient-based methods in various vision-language tasksunder few-shot settings. Furthermore, the designs of SINC help us investigatethe benefits of in-context learning across different tasks, and the analysisfurther reveals the essential components for the emergence of in-contextlearning in the vision-language domain.",,arXiv,"['cs.cv', 'cs.ai']",, selfgenerated incontext learning leveraging autoregressive language models as a demonstration generator,"['Hyuhng Joon Kim', 'Hyunsoo Cho', 'Junyeob Kim', 'Taeuk Kim', 'Kang Min Yoo', 'Sang-goo Lee']",http://arxiv.org/pdf/2206.08082v1.pdf,2022-06-16,," Large-scale pre-trained language models (PLMs) are well-known for beingcapable of solving a task simply by conditioning a few input-label pairs dubbeddemonstrations on a prompt without being explicitly tuned for the desireddownstream task. Such a process (i.e., in-context learning), however, naturallyleads to high reliance on the demonstrations which are usually selected fromexternal datasets. In this paper, we propose self-generated in-context learning(SG-ICL), which generates demonstrations for in-context learning from PLMitself to minimize the reliance on the external demonstration. We conductexperiments on four different text classification tasks and show SG-ICLsignificantly outperforms zero-shot learning and is generally worthapproximately 0.6 gold training samples. Moreover, our generated demonstrationsshow more consistent performance with low variance compared to randomlyselected demonstrations from the training dataset.",,arXiv,['cs.cl'],, active example selection for incontext learning,"['Yiming Zhang', 'Shi Feng', 'Chenhao Tan']",http://arxiv.org/pdf/2211.04486v1.pdf,2022-11-08,," With a handful of demonstration examples, large-scale language models showstrong capability to perform various tasks by in-context learning from theseexamples, without any fine-tuning. 
We demonstrate that in-context learningperformance can be highly unstable across samples of examples, indicating theidiosyncrasies of how language models acquire information. We formulate exampleselection for in-context learning as a sequential decision problem, and proposea reinforcement learning algorithm for identifying generalizable policies toselect demonstration examples. For GPT-2, our learned policies demonstratestrong abilities of generalizing to unseen tasks in training, with a $5.8\%$improvement on average. Examples selected from our learned policies can evenachieve a small improvement on GPT-3 Ada. However, the improvement diminisheson larger GPT-3 models, suggesting emerging capabilities of large languagemodels.",,arXiv,"['cs.cl', 'cs.ai']",, bayesian optimization of catalysts with incontext learning,"['Mayk Caldas Ramos', 'Shane S. Michtavy', 'Marc D. Porosoff', 'Andrew D. White']",http://arxiv.org/pdf/2304.05341v1.pdf,2023-04-11,," Large language models (LLMs) are able to do accurate classification with zeroor only a few examples (in-context learning). We show a prompting system thatenables regression with uncertainty for in-context learning with frozen LLM(GPT-3, GPT-3.5, and GPT-4) models, allowing predictions without features orarchitecture tuning. By incorporating uncertainty, our approach enablesBayesian optimization for catalyst or molecule optimization using naturallanguage, eliminating the need for training or simulation. Here, we performedthe optimization using the synthesis procedure of catalysts to predictproperties. Working with natural language mitigates difficulty synthesizabilitysince the literal synthesis procedure is the model's input. We showed thatin-context learning could improve past a model context window (maximum numberof tokens the model can process at once) as data is gathered via exampleselection, allowing the model to scale better. Although our method does notoutperform all baselines, it requires zero training, feature selection, andminimal computing while maintaining satisfactory performance. We also findGaussian Process Regression on text embeddings is strong at Bayesianoptimization. The code is available in our GitHub repository:https://github.com/ur-whitelab/BO-LIFT",,arXiv,"['physics.chem-ph', 'cs.lg']",, incontext learning unlocked for diffusion models,"['Zhendong Wang', 'Yifan Jiang', 'Yadong Lu', 'Yelong Shen', 'Pengcheng He', 'Weizhu Chen', 'Zhangyang Wang', 'Mingyuan Zhou']",http://arxiv.org/pdf/2305.01115v2.pdf,2023-05-01,," We present Prompt Diffusion, a framework for enabling in-context learning indiffusion-based generative models. Given a pair of task-specific exampleimages, such as depth from/to image and scribble from/to image, and a textguidance, our model automatically understands the underlying task and performsthe same task on a new query image following the text guidance. To achievethis, we propose a vision-language prompt that can model a wide range ofvision-language tasks and a diffusion model that takes it as input. Thediffusion model is trained jointly over six different tasks using theseprompts. The resulting Prompt Diffusion model is the first diffusion-basedvision-language foundation model capable of in-context learning. Itdemonstrates high-quality in-context generation on the trained tasks andgeneralizes effectively to new, unseen vision tasks with their respectiveprompts. Our model also shows compelling text-guided image editing results. 
Ourframework aims to facilitate research into in-context learning for computervision. We share our code and pre-trained models athttps://github.com/Zhendong-Wang/Prompt-Diffusion.",,arXiv,['cs.cv'],, large language models can be lazy learners analyze shortcuts in incontext learning,"['Ruixiang Tang', 'Dehan Kong', 'Longtao Huang', 'Hui Xue']",http://arxiv.org/pdf/2305.17256v2.pdf,2023-05-26,," Large language models (LLMs) have recently shown great potential forin-context learning, where LLMs learn a new task simply by conditioning on afew input-label pairs (prompts). Despite their potential, our understanding ofthe factors influencing end-task performance and the robustness of in-contextlearning remains limited. This paper aims to bridge this knowledge gap byinvestigating the reliance of LLMs on shortcuts or spurious correlations withinprompts. Through comprehensive experiments on classification and extractiontasks, we reveal that LLMs are ""lazy learners"" that tend to exploit shortcutsin prompts for downstream tasks. Additionally, we uncover a surprising findingthat larger models are more likely to utilize shortcuts in prompts duringinference. Our findings provide a new perspective on evaluating robustness inin-context learning and pose new challenges for detecting and mitigating theuse of shortcuts in prompts.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, multidimensional evaluation of text summarization with incontext learning,"['Sameer Jain', 'Vaishakh Keshava', 'Swarnashree Mysore Sathyendra', 'Patrick Fernandes', 'Pengfei Liu', 'Graham Neubig', 'Chunting Zhou']",http://arxiv.org/pdf/2306.01200v1.pdf,2023-06-01,," Evaluation of natural language generation (NLG) is complex andmulti-dimensional. Generated text can be evaluated for fluency, coherence,factuality, or any other dimensions of interest. Most frameworks that performsuch multi-dimensional evaluation require training on large manually orsynthetically generated datasets. In this paper, we study the efficacy of largelanguage models as multi-dimensional evaluators using in-context learning,obviating the need for large training datasets. Our experiments show thatin-context learning-based evaluators are competitive with learned evaluationframeworks for the task of text summarization, establishing state-of-the-art ondimensions such as relevance and factual consistency. We then analyze theeffects of factors such as the selection and number of in-context examples onperformance. Finally, we study the efficacy of in-context learning basedevaluators in evaluating zero-shot summaries written by large language modelssuch as GPT-3.",,arXiv,['cs.cl'],, exploring the integration of large language models into automatic speech recognition systems an empirical study,"['Zeping Min', 'Jinbo Wang']",http://arxiv.org/pdf/2307.06530v1.pdf,2023-07-13,," This paper explores the integration of Large Language Models (LLMs) intoAutomatic Speech Recognition (ASR) systems to improve transcription accuracy.The increasing sophistication of LLMs, with their in-context learningcapabilities and instruction-following behavior, has drawn significantattention in the field of Natural Language Processing (NLP). Our primary focusis to investigate the potential of using an LLM's in-context learningcapabilities to enhance the performance of ASR systems, which currently facechallenges such as ambient noise, speaker accents, and complex linguisticcontexts. 
We designed a study using the Aishell-1 and LibriSpeech datasets,with ChatGPT and GPT-4 serving as benchmarks for LLM capabilities.Unfortunately, our initial experiments did not yield promising results,indicating the complexity of leveraging LLM's in-context learning for ASRapplications. Despite further exploration with varied settings and models, thecorrected sentences from the LLMs frequently resulted in higher Word ErrorRates (WER), demonstrating the limitations of LLMs in speech applications. Thispaper provides a detailed overview of these experiments, their results, andimplications, establishing that using LLMs' in-context learning capabilities tocorrect potential errors in speech recognition transcriptions is still achallenging task at the current stage.",,arXiv,"['cs.cl', 'cs.sd', 'eess.as']",, actsql incontext learning for texttosql with automaticallygenerated chainofthought,"['Hanchong Zhang', 'Ruisheng Cao', 'Lu Chen', 'Hongshen Xu', 'Kai Yu']",http://arxiv.org/pdf/2310.17342v1.pdf,2023-10-26,," Recently Large Language Models (LLMs) have been proven to have strongabilities in various domains and tasks. We study the problem of promptdesigning in the text-to-SQL task and attempt to improve the LLMs' reasoningability when generating SQL queries. Besides the trivial few-shot in-contextlearning setting, we design our chain-of-thought (CoT) prompt with a similarmethod to schema linking. We provide a method named ACT-SQL to automaticallygenerate auto-CoT exemplars and thus the whole process doesn't need manuallabeling. Our approach is cost-saving since we only use the LLMs' API call oncewhen generating one SQL query. Furthermore, we extend our in-context learningmethod to the multi-turn text-to-SQL task. The experiment results show that theLLMs' performance can benefit from our ACT-SQL approach. Our approach achievesSOTA performance on the Spider dev set among existing in-context learningapproaches.",,arXiv,['cs.cl'],, cosmic data efficient instructiontuning for speech incontext learning,"['Jing Pan', 'Jian Wu', 'Yashesh Gaur', 'Sunit Sivasankaran', 'Zhuo Chen', 'Shujie Liu', 'Jinyu Li']",http://arxiv.org/pdf/2311.02248v1.pdf,2023-11-03,," We present a data and cost efficient way of incorporating the speech modalityinto a large language model (LLM). The resulting multi-modal LLM is aCOntextual Speech Model with Instruction-following/in-context-learningCapabilities - COSMIC. Speech comprehension test question-answer (SQA) pairsare generated using GPT-3.5 based on the speech transcriptions as a part of thesupervision for the instruction tuning. With fewer than 20M trainableparameters and as little as 450 hours of English speech data for SQAgeneration, COSMIC exhibits emergent instruction-following and in-contextlearning capabilities in speech-to-text tasks. The model is able to follow thegiven text instructions to generate text response even on the unseen EN$\to$Xspeech-to-text translation (S2TT) task with zero-shot setting. We evaluate themodel's in-context learning via various tasks such as EN$\to$X S2TT andfew-shot domain adaptation. And instruction-following capabilities areevaluated through a contextual biasing benchmark. 
Our results demonstrate theefficacy of the proposed low cost recipe for building a speech LLM and thatwith the new instruction-tuning data.",,arXiv,"['cs.cl', 'cs.ai', 'eess.as']",, thinking about gpt3 incontext learning for biomedical ie think again,"['Bernal Jiménez Gutiérrez', 'Nikolas McNeal', 'Clay Washington', 'You Chen', 'Lang Li', 'Huan Sun', 'Yu Su']",http://arxiv.org/pdf/2203.08410v3.pdf,2022-03-16,," The strong few-shot in-context learning capability of large pre-trainedlanguage models (PLMs) such as GPT-3 is highly appealing for applicationdomains such as biomedicine, which feature high and diverse demands of languagetechnologies but also high data annotation costs. In this paper, we present thefirst systematic and comprehensive study to compare the few-shot performance ofGPT-3 in-context learning with fine-tuning smaller (i.e., BERT-sized) PLMs ontwo highly representative biomedical information extraction tasks, named entityrecognition and relation extraction. We follow the true few-shot setting toavoid overestimating models' few-shot performance by model selection over alarge validation set. We also optimize GPT-3's performance with knowntechniques such as contextual calibration and dynamic in-context exampleretrieval. However, our results show that GPT-3 still significantlyunderperforms compared to simply fine-tuning a smaller PLM. In addition, GPT-3in-context learning also yields smaller gains in accuracy when more trainingdata becomes available. Our in-depth analyses further reveal issues of thein-context learning setting that may be detrimental to information extractiontasks in general. Given the high cost of experimenting with GPT-3, we hope ourstudy provides guidance for biomedical researchers and practitioners towardsmore promising directions such as fine-tuning small PLMs.",,arXiv,"['cs.cl', 'cs.ir']",, exploring effective factors for improving visual incontext learning,"['Yanpeng Sun', 'Qiang Chen', 'Jian Wang', 'Jingdong Wang', 'Zechao Li']",http://arxiv.org/pdf/2304.04748v1.pdf,2023-04-10,," The In-Context Learning (ICL) is to understand a new task via a fewdemonstrations (aka. prompt) and predict new inputs without tuning the models.While it has been widely studied in NLP, it is still a relatively new area ofresearch in computer vision. To reveal the factors influencing the performanceof visual in-context learning, this paper shows that prompt selection andprompt fusion are two major factors that have a direct impact on the inferenceperformance of visual context learning. Prompt selection is the process ofidentifying the most appropriate prompt or example to help the model understandnew tasks. This is important because providing the model with relevant promptscan help it learn more effectively and efficiently. Prompt fusion involvescombining knowledge from different positions within the large-scale visualmodel. By doing this, the model can leverage the diverse knowledge stored indifferent parts of the model to improve its performance on new tasks. Basedthese findings, we propose a simple framework prompt-SelF for visual in-contextlearning. Specifically, we first use the pixel-level retrieval method to selecta suitable prompt, and then use different prompt fusion methods to activate allthe knowledge stored in the large-scale model, and finally ensemble theprediction results obtained from different prompt fusion methods to obtain thefinal prediction results. 
And we conduct extensive experiments on single-objectsegmentation and detection tasks to demonstrate the effectiveness ofprompt-SelF. Remarkably, the prompt-SelF has outperformed OSLSM basedmeta-learning in 1-shot segmentation for the first time. This indicated thegreat potential of visual in-context learning. The source code and models willbe available at \url{https://github.com/syp2ysy/prompt-SelF}.",,arXiv,['cs.cv'],, dissecting chainofthought compositionality through incontext filtering and learning,"['Yingcong Li', 'Kartik Sreenivasan', 'Angeliki Giannou', 'Dimitris Papailiopoulos', 'Samet Oymak']",http://arxiv.org/pdf/2305.18869v2.pdf,2023-05-30,," Chain-of-thought (CoT) is a method that enables language models to handlecomplex reasoning tasks by decomposing them into simpler steps. Despite itssuccess, the underlying mechanics of CoT are not yet fully understood. In anattempt to shed light on this, our study investigates the impact of CoT on theability of transformers to in-context learn a simple to study, yet generalfamily of compositional functions: multi-layer perceptrons (MLPs). In thissetting, we find that the success of CoT can be attributed to breaking downin-context learning of a compositional function into two distinct phases:focusing on and filtering data related to each step of the composition andin-context learning the single-step composition function. Through bothexperimental and theoretical evidence, we demonstrate how CoT significantlyreduces the sample complexity of in-context learning (ICL) and facilitates thelearning of complex functions that non-CoT methods struggle with. Furthermore,we illustrate how transformers can transition from vanilla in-context learningto mastering a compositional function with CoT by simply incorporatingadditional layers that perform the necessary data-filtering for CoT via theattention mechanism. In addition to these test-time benefits, we show CoT helpsaccelerate pretraining by learning shortcuts to represent complex functions andfiltering plays an important role in this process. These findings collectivelyprovide insights into the mechanics of CoT, inviting further investigation ofits role in complex reasoning tasks.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cl']",, incontext learning through the bayesian prism,"['Kabir Ahuja', 'Madhur Panwar', 'Navin Goyal']",http://arxiv.org/pdf/2306.04891v1.pdf,2023-06-08,," In-context learning is one of the surprising and useful features of largelanguage models. How it works is an active area of research. Recently, stylizedmeta-learning-like setups have been devised that train these models on asequence of input-output pairs $(x, f(x))$ from a function class using thelanguage modeling loss and observe generalization to unseen functions from thesame class. One of the main discoveries in this line of research has been thatfor several problems such as linear regression, trained transformers learnalgorithms for learning functions in context. However, the inductive biases ofthese models resulting in this behavior are not clearly understood. A modelwith unlimited training data and compute is a Bayesian predictor: it learns thepretraining distribution. It has been shown that high-capacity transformersmimic the Bayesian predictor for linear regression. In this paper, we showempirical evidence of transformers exhibiting the behavior of this ideallearner across different linear and non-linear function classes. 
We also extendthe previous setups to work in the multitask setting and verify thattransformers can do in-context learning in this setup as well and the Bayesianperspective sheds light on this setting also. Finally, via the example oflearning Fourier series, we study the inductive bias for in-context learning.We find that in-context learning may or may not have simplicity bias dependingon the pretraining data distribution.",,arXiv,"['cs.lg', 'cs.cl']",, explore incontext learning for 3d point cloud understanding,"['Zhongbin Fang', 'Xiangtai Li', 'Xia Li', 'Joachim M. Buhmann', 'Chen Change Loy', 'Mengyuan Liu']",http://arxiv.org/pdf/2306.08659v2.pdf,2023-06-14,," With the rise of large-scale models trained on broad data, in-contextlearning has become a new learning paradigm that has demonstrated significantpotential in natural language processing and computer vision tasks. Meanwhile,in-context learning is still largely unexplored in the 3D point cloud domain.Although masked modeling has been successfully applied for in-context learningin 2D vision, directly extending it to 3D point clouds remains a formidablechallenge. In the case of point clouds, the tokens themselves are the pointcloud positions (coordinates) that are masked during inference. Moreover,position embedding in previous works may inadvertently introduce informationleakage. To address these challenges, we introduce a novel framework, namedPoint-In-Context, designed especially for in-context learning in 3D pointclouds, where both inputs and outputs are modeled as coordinates for each task.Additionally, we propose the Joint Sampling module, carefully designed to workin tandem with the general point sampling operator, effectively resolving theaforementioned technical issues. We conduct extensive experiments to validatethe versatility and adaptability of our proposed methods in handling a widerange of tasks.",,arXiv,['cs.cv'],, dqlore dual queries with low rank approximation reranking for incontext learning,"['Jing Xiong', 'Zixuan Li', 'Chuanyang Zheng', 'Zhijiang Guo', 'Yichun Yin', 'Enze Xie', 'Zhicheng Yang', 'Qingxing Cao', 'Haiming Wang', 'Xiongwei Han', 'Jing Tang', 'Chengming Li', 'Xiaodan Liang']",http://arxiv.org/pdf/2310.02954v4.pdf,2023-10-04,," Recent advances in natural language processing, primarily propelled by LargeLanguage Models (LLMs), have showcased their remarkable capabilities groundedin in-context learning. A promising avenue for guiding LLMs in intricatereasoning tasks involves the utilization of intermediate reasoning steps withinthe Chain-of-Thought (CoT) paradigm. Nevertheless, the central challenge liesin the effective selection of exemplars for facilitating in-context learning.In this study, we introduce a framework that leverages Dual Queries andLow-rank approximation Re-ranking (DQ-LoRe) to automatically select exemplarsfor in-context learning. Dual Queries first query LLM to obtain LLM-generatedknowledge such as CoT, then query the retriever to obtain the final exemplarsvia both question and the knowledge. Moreover, for the second query, LoReemploys dimensionality reduction techniques to refine exemplar selection,ensuring close alignment with the input question's knowledge. Through extensiveexperiments, we demonstrate that DQ-LoRe significantly outperforms priorstate-of-the-art methods in the automatic selection of exemplars for GPT-4,enhancing performance from 92.5% to 94.2%. 
Our comprehensive analysis furtherreveals that DQ-LoRe consistently outperforms retrieval-based approaches interms of both performance and adaptability, especially in scenarioscharacterized by distribution shifts. DQ-LoRe pushes the boundaries ofin-context learning and opens up new avenues for addressing complex reasoningchallenges. We will release the code soon.",,arXiv,['cs.cl'],, compositional exemplars for incontext learning,"['Jiacheng Ye', 'Zhiyong Wu', 'Jiangtao Feng', 'Tao Yu', 'Lingpeng Kong']",http://arxiv.org/pdf/2302.05698v3.pdf,2023-02-11,," Large pretrained language models (LMs) have shown impressive In-ContextLearning (ICL) ability, where the model learns to do an unseen task via aprompt consisting of input-output examples as the demonstration, without anyparameter updates. The performance of ICL is highly dominated by the quality ofthe selected in-context examples. However, previous selection methods aremostly based on simple heuristics, leading to sub-optimal performance. In thiswork, we formulate in-context example selection as a subset selection problem.We propose CEIL (Compositional Exemplars for In-context Learning), which isinstantiated by Determinantal Point Processes (DPPs) to model the interactionbetween the given input and in-context examples, and optimized through acarefully-designed contrastive learning objective to obtain preference fromLMs. We validate CEIL on 12 classification and generation datasets from 7distinct NLP tasks, including sentiment analysis, paraphrase detection, naturallanguage inference, commonsense reasoning, open-domain question answering, codegeneration, and semantic parsing. Extensive experiments demonstrate not onlythe state-of-the-art performance but also the transferability andcompositionality of CEIL, shedding new light on effective and efficientin-context learning. Our code is released athttps://github.com/HKUNLP/icl-ceil.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, icld3ie incontext learning with diverse demonstrations updating for document information extraction,"['Jiabang He', 'Lei Wang', 'Yi Hu', 'Ning Liu', 'Hui Liu', 'Xing Xu', 'Heng Tao Shen']",http://arxiv.org/pdf/2303.05063v4.pdf,2023-03-09,," Large language models (LLMs), such as GPT-3 and ChatGPT, have demonstratedremarkable results in various natural language processing (NLP) tasks within-context learning, which involves inference based on a few demonstrationexamples. Despite their successes in NLP tasks, no investigation has beenconducted to assess the ability of LLMs to perform document informationextraction (DIE) using in-context learning. Applying LLMs to DIE poses twochallenges: the modality and task gap. To this end, we propose a simple buteffective in-context learning framework called ICL-D3IE, which enables LLMs toperform DIE with different types of demonstration examples. Specifically, weextract the most difficult and distinct segments from hard training documentsas hard demonstrations for benefiting all test instances. We designdemonstrations describing relationships that enable LLMs to understandpositional relationships. We introduce formatting demonstrations for easyanswer extraction. Additionally, the framework improves diverse demonstrationsby updating them iteratively. 
Our experiments on three widely used benchmarkdatasets demonstrate that the ICL-D3IE framework enables Davinci-003/ChatGPT toachieve superior performance when compared to previous pre-trained methodsfine-tuned with full training in both the in-distribution (ID) setting and inthe out-of-distribution (OOD) setting. Code is available athttps://github.com/MAEHCM/ICL-D3IE.",,arXiv,['cs.cl'],, learning to retrieve prompts for incontext learning,"['Ohad Rubin', 'Jonathan Herzig', 'Jonathan Berant']",http://arxiv.org/pdf/2112.08633v2.pdf,2021-12-16,," In-context learning is a recent paradigm in natural language understanding,where a large pre-trained language model (LM) observes a test instance and afew training examples as its input, and directly decodes the output without anyupdate to its parameters. However, performance has been shown to stronglydepend on the selected training examples (termed prompt). In this work, wepropose an efficient method for retrieving prompts for in-context learningusing annotated data and a LM. Given an input-output pair, we estimate theprobability of the output given the input and a candidate training example asthe prompt, and label training examples as positive or negative based on thisprobability. We then train an efficient dense retriever from this data, whichis used to retrieve training examples as prompts at test time. We evaluate ourapproach on three sequence-to-sequence tasks where language utterances aremapped to meaning representations, and find that it substantially outperformsprior work and multiple baselines across the board.",,arXiv,"['cs.cl', 'cs.lg']",, semanticoriented unlabeled priming for largescale language models,"['Yanchen Liu', 'Timo Schick', 'Hinrich Schütze']",http://arxiv.org/pdf/2202.06133v1.pdf,2022-02-12,," Due to the high costs associated with finetuning large language models,various recent works propose to adapt them to specific tasks without anyparameter updates through in-context learning. Unfortunately, for in-contextlearning there is currently no way to leverage unlabeled data, which is oftenmuch easier to obtain in large quantities than labeled examples. In this work,we therefore investigate ways to make use of unlabeled examples to improve thezero-shot performance of pretrained language models without any finetuning: Weintroduce Semantic-Oriented Unlabeled Priming (SOUP), a method that classifiesexamples by retrieving semantically similar unlabeled examples, assigninglabels to them in a zero-shot fashion, and then using them for in-contextlearning. We also propose bag-of-contexts priming, a new priming strategy thatis more suitable for our setting and enables the usage of more examples thanfit into the context window.",,arXiv,['cs.cl'],, diverse demonstrations improve incontext compositional generalization,"['Itay Levy', 'Ben Bogin', 'Jonathan Berant']",http://arxiv.org/pdf/2212.06800v3.pdf,2022-12-13,," In-context learning has shown great success in i.i.d semantic parsing splits,where the training and test sets are drawn from the same distribution. In thissetup, models are typically prompted with demonstrations that are similar tothe input utterance. However, in the setup of compositional generalization,where models are tested on outputs with structures that are absent from thetraining set, selecting similar demonstrations is insufficient, as often noexample will be similar enough to the input. 
In this work, we propose a methodto select diverse demonstrations that aims to collectively cover all of thestructures required in the output program, in order to encourage the model togeneralize to new structures from these demonstrations. We empirically showthat combining diverse demonstrations with in-context learning substantiallyimproves performance across three compositional generalization semantic parsingdatasets in the pure in-context learning setup and when combined withfinetuning.",,arXiv,['cs.cl'],, the impact of symbolic representations on incontext learning for fewshot reasoning,"['Hanlin Zhang', 'Yi-Fan Zhang', 'Li Erran Li', 'Eric Xing']",http://arxiv.org/pdf/2212.08686v1.pdf,2022-12-16,," Pre-trained language models (LMs) have shown remarkable reasoning performanceusing explanations (or ``chain-of-thought'' (CoT)) for in-context learning. Onthe other hand, these reasoning tasks are usually presumed to be moreapproachable for symbolic programming. To make progress towards understandingin-context learning, we curate synthetic datasets containing equivalent(natural, symbolic) data pairs, where symbolic examples contain first-orderlogic rules and predicates from knowledge bases (KBs). Then we revisitneuro-symbolic approaches and use Language Models as Logic Programmer (LMLP)that learns from demonstrations containing logic rules and correspondingexamples to iteratively reason over KBs, recovering Prolog's backward chainingalgorithm. Comprehensive experiments are included to systematically compareLMLP with CoT in deductive reasoning settings, showing that LMLP enjoys morethan 25% higher accuracy than CoT on length generalization benchmarks even withfewer parameters.",,arXiv,['cs.cl'],, selfadaptive incontext learning an information compression perspective for incontext example selection and ordering,"['Zhiyong Wu', 'Yaoxiang Wang', 'Jiacheng Ye', 'Lingpeng Kong']",http://arxiv.org/pdf/2212.10375v2.pdf,2022-12-20,," Despite the surprising few-shot performance of in-context learning (ICL), itis still a common practice to randomly sample examples to serve as context.This paper advocates a new principle for ICL: self-adaptive in-contextlearning. The self-adaption mechanism is introduced to help each sample find anin-context example permutation (i.e., selection and ordering) that can derivethe correct prediction, thus maximizing performance. To validate theeffectiveness of self-adaptive ICL, we propose a general select-then-rankframework and instantiate it with new selection and ranking algorithms. Uponextensive evaluation on eight different NLP datasets, our self-adaptive ICLmethod achieves a 40% relative improvement over the common practice setting.Further analysis reveals the enormous potential of self-adaptive ICL that itmight be able to close the gap between ICL and finetuning given more advancedalgorithms. Our code is released to facilitate future research in this area:https://github.com/Shark-NLP/self-adaptive-ICL",,arXiv,"['cs.cl', 'cs.ai']",, privacypreserving incontext learning for large language models,"['Tong Wu', 'Ashwinee Panda', 'Jiachen T. Wang', 'Prateek Mittal']",http://arxiv.org/pdf/2305.01639v2.pdf,2023-05-02,," In-context learning (ICL) is an important capability of Large Language Models(LLMs), enabling these models to dynamically adapt based on specific,in-context exemplars, thereby improving accuracy and relevance. However, LLM'sresponses may leak the sensitive private information contained in in-contextexemplars. 
To address this challenge, we propose Differentially PrivateIn-context Learning (DP-ICL), a general paradigm for privatizing ICL tasks. Thekey idea for DP-ICL paradigm is generating differentially private responsesthrough a noisy consensus among an ensemble of LLM's responses based ondisjoint exemplar sets. Based on the general paradigm of DP-ICL, we instantiateseveral techniques showing how to privatize ICL for text classification andlanguage generation. We evaluate DP-ICL on four text classification benchmarksand two language generation tasks, and our empirical results show that DP-ICLachieves a strong utility-privacy tradeoff.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cr']",, incontext learning as maintaining coherency a study of onthefly machine translation using large language models,"['Suzanna Sia', 'Kevin Duh']",http://arxiv.org/pdf/2305.03573v1.pdf,2023-05-05,," The phenomena of in-context learning has typically been thought of as""learning from examples"". In this work which focuses on Machine Translation, wepresent a perspective of in-context learning as the desired generation taskmaintaining coherency with its context, i.e., the prompt examples. We firstinvestigate randomly sampled prompts across 4 domains, and find thattranslation performance improves when shown in-domain prompts. Next, weinvestigate coherency for the in-domain setting, which uses prompt examplesfrom a moving window. We study this with respect to other factors that havepreviously been identified in the literature such as length, surface similarityand sentence embedding similarity. Our results across 3 models (GPTNeo2.7B,Bloom3B, XGLM2.9B), and three translation directions(\texttt{en}$\rightarrow$\{\texttt{pt, de, fr}\}) suggest that the long-termcoherency of the prompts and the test sentence is a good indicator ofdownstream translation performance. In doing so, we demonstrate the efficacy ofIn-context Machine Translation for on-the-fly adaptation.",,arXiv,"['cs.cl', 'cs.ai']",, small models are valuable plugins for large language models,"['Canwen Xu', 'Yichong Xu', 'Shuohang Wang', 'Yang Liu', 'Chenguang Zhu', 'Julian McAuley']",http://arxiv.org/pdf/2305.08848v1.pdf,2023-05-15,," Large language models (LLMs) such as GPT-3 and GPT-4 are powerful but theirweights are often publicly unavailable and their immense sizes make the modelsdifficult to be tuned with common hardware. As a result, effectively tuningthese models with large-scale supervised data can be challenging. As analternative, In-Context Learning (ICL) can only use a small number ofsupervised examples due to context length limits. In this paper, we proposeSuper In-Context Learning (SuperICL) which allows black-box LLMs to work withlocally fine-tuned smaller models, resulting in superior performance onsupervised tasks. Our experiments demonstrate that SuperICL can improveperformance beyond state-of-the-art fine-tuned models while addressing theinstability problem of in-context learning. Furthermore, SuperICL can enhancethe capabilities of smaller models, such as multilinguality andinterpretability.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, gptfinre incontext learning for financial relation extraction using large language models,"['Pawan Kumar Rajpoot', 'Ankur Parikh']",http://arxiv.org/pdf/2306.17519v2.pdf,2023-06-30,," Relation extraction (RE) is a crucial task in natural language processing(NLP) that aims to identify and classify relationships between entitiesmentioned in text. 
In the financial domain, relation extraction plays a vital role in extracting valuable information from financial documents, such as news articles, earnings reports, and company filings. This paper describes our solution to relation extraction on one such dataset, REFinD. The dataset was released along with a shared task as a part of the Fourth Workshop on Knowledge Discovery from Unstructured Data in Financial Services, co-located with SIGIR 2023. In this paper, we employed OpenAI models under the framework of in-context learning (ICL). We utilized two retrieval strategies to find the top K relevant in-context learning demonstrations / examples from training data for a given test example. The first retrieval mechanism we employed is a learning-free dense retriever, and the other system is a learning-based retriever. We were able to achieve 3rd rank overall. Our best F1-score is 0.718.",,arXiv,['cs.cl'],, codestyle incontext learning for knowledgebased question answering,"['Zhijie Nie', 'Richong Zhang', 'Zhongyuan Wang', 'Xudong Liu']",http://arxiv.org/pdf/2309.04695v2.pdf,2023-09-09,," Current methods for Knowledge-Based Question Answering (KBQA) usually rely on complex training techniques and model frameworks, leading to many limitations in practical applications. Recently, the emergence of In-Context Learning (ICL) capabilities in Large Language Models (LLMs) provides a simple and training-free semantic parsing paradigm for KBQA: Given a small number of questions and their labeled logical forms as demo examples, LLMs can understand the task intent and generate the logic form for a new question. However, current powerful LLMs have little exposure to logic forms during pre-training, resulting in a high format error rate. To solve this problem, we propose a code-style in-context learning method for KBQA, which converts the generation process of unfamiliar logical forms into the more familiar code generation process for LLMs. Experimental results on three mainstream datasets show that our method dramatically mitigated the formatting error problem in generating logic forms while realizing a new SOTA on WebQSP, GrailQA, and GraphQ under the few-shot setting. The code and supplementary files are released at https://github.com/Arthurizijar/KB-Coder .",,arXiv,"['cs.cl', 'cs.ai']",, iclef incontext learning with expert feedback for explainable style transfer,"['Arkadiy Saakyan', 'Smaranda Muresan']",http://arxiv.org/pdf/2309.08583v1.pdf,2023-09-15,," While state-of-the-art language models excel at the style transfer task, current work does not address the explainability of style transfer systems. Explanations could be generated using large language models such as GPT-3.5 and GPT-4, but the use of such complex systems is inefficient when smaller, widely distributed, and transparent alternatives are available. We propose a framework to augment and improve a formality style transfer dataset with explanations via model distillation from ChatGPT. To further refine the generated explanations, we propose a novel way to incorporate scarce expert human feedback using in-context learning (ICLEF: In-Context Learning from Expert Feedback) by prompting ChatGPT to act as a critic to its own outputs. We use the resulting dataset of 9,960 explainable formality style transfer instances (e-GYAFC) to show that current openly distributed instruction-tuned models (and, in some settings, ChatGPT) perform poorly on the task, and that fine-tuning on our high-quality dataset leads to significant improvements as shown by automatic evaluation.
In human evaluation, we show that models much smaller than ChatGPTfine-tuned on our data align better with expert preferences. Finally, wediscuss two potential applications of models fine-tuned on the explainablestyle transfer task: interpretable authorship verification and interpretableadversarial attacks on AI-generated text detectors.",,arXiv,['cs.cl'],, utilising a large language model to annotate subject metadata a case study in an australian national research data catalogue,"['Shiwei Zhang', 'Mingfang Wu', 'Xiuzhen Zhang']",http://arxiv.org/pdf/2310.11318v1.pdf,2023-10-17,," In support of open and reproducible research, there has been a rapidlyincreasing number of datasets made available for research. As the availabilityof datasets increases, it becomes more important to have quality metadata fordiscovering and reusing them. Yet, it is a common issue that datasets oftenlack quality metadata due to limited resources for data curation. Meanwhile,technologies such as artificial intelligence and large language models (LLMs)are progressing rapidly. Recently, systems based on these technologies, such asChatGPT, have demonstrated promising capabilities for certain data curationtasks. This paper proposes to leverage LLMs for cost-effective annotation ofsubject metadata through the LLM-based in-context learning. Our method employsGPT-3.5 with prompts designed for annotating subject metadata, demonstratingpromising performance in automatic metadata annotation. However, models basedon in-context learning cannot acquire discipline-specific rules, resulting inlower performance in several categories. This limitation arises from thelimited contextual information available for subject inference. To the best ofour knowledge, we are introducing, for the first time, an in-context learningmethod that harnesses large language models for automated subject metadataannotation.",,arXiv,"['cs.cl', 'cs.ai']",, hintenhanced incontext learning wakes large language models up for knowledgeintensive tasks,"['Yifan Wang', 'Qingyan Guo', 'Xinzhe Ni', 'Chufan Shi', 'Lemao Liu', 'Haiyun Jiang', 'Yujiu Yang']",http://arxiv.org/pdf/2311.01949v1.pdf,2023-11-03,," In-context learning (ICL) ability has emerged with the increasing scale oflarge language models (LLMs), enabling them to learn input-label mappings fromdemonstrations and perform well on downstream tasks. However, under thestandard ICL setting, LLMs may sometimes neglect query-related information indemonstrations, leading to incorrect predictions. To address this limitation,we propose a new paradigm called Hint-enhanced In-Context Learning (HICL) toexplore the power of ICL in open-domain question answering, an important formin knowledge-intensive tasks. HICL leverages LLMs' reasoning ability to extractquery-related knowledge from demonstrations, then concatenates the knowledge toprompt LLMs in a more explicit way. Furthermore, we track the source of thisknowledge to identify specific examples, and introduce a Hint-related ExampleRetriever (HER) to select informative examples for enhanced demonstrations. 
We evaluate HICL with HER on 3 open-domain QA benchmarks, and observe average performance gains of 2.89 EM score and 2.52 F1 score on gpt-3.5-turbo, and 7.62 EM score and 7.27 F1 score on LLaMA-2-Chat-7B, compared with the standard setting.",,arXiv,['cs.cl'],, rethinking the role of demonstrations what makes incontext learning work,"['Sewon Min', 'Xinxi Lyu', 'Ari Holtzman', 'Mikel Artetxe', 'Mike Lewis', 'Hannaneh Hajishirzi', 'Luke Zettlemoyer']",http://arxiv.org/pdf/2202.12837v2.pdf,2022-02-25,," Large language models (LMs) are able to in-context learn -- perform a new task via inference alone by conditioning on a few input-label pairs (demonstrations) and making predictions for new inputs. However, there has been little understanding of how the model learns and which aspects of the demonstrations contribute to end task performance. In this paper, we show that ground truth demonstrations are in fact not required -- randomly replacing labels in the demonstrations barely hurts performance on a range of classification and multi-choice tasks, consistently over 12 different models including GPT-3. Instead, we find that other aspects of the demonstrations are the key drivers of end task performance, including the fact that they provide a few examples of (1) the label space, (2) the distribution of the input text, and (3) the overall format of the sequence. Together, our analysis provides a new way of understanding how and why in-context learning works, while opening up new questions about how much can be learned from large language models through inference alone.",,arXiv,"['cs.cl', 'cs.ai']",, fewshot anaphora resolution in scientific protocols via mixtures of incontext experts,"['Nghia T. Le', 'Fan Bai', 'Alan Ritter']",http://arxiv.org/pdf/2210.03690v2.pdf,2022-10-07,," Anaphora resolution is an important task for information extraction across a range of languages, text genres, and domains, motivating the need for methods that do not require large annotated datasets. In-context learning has emerged as a promising approach, yet there are a number of challenges in applying in-context learning to resolve anaphora. For example, encoding a single in-context demonstration that consists of: an anaphor, a paragraph-length context, and a list of corresponding antecedents, requires conditioning a language model on a long sequence of tokens, limiting the number of demonstrations per prompt. In this paper, we present MICE (Mixtures of In-Context Experts), which we demonstrate is effective for few-shot anaphora resolution in scientific protocols (Tamari et al., 2021). Given only a handful of training examples, MICE combines the predictions of hundreds of in-context experts, yielding a 30% increase in F1 score over a competitive prompt retrieval baseline. Furthermore, we show MICE can be used to train compact student models without sacrificing performance. As far as we are aware, this is the first work to present experimental results demonstrating the effectiveness of in-context learning on the task of few-shot anaphora resolution in scientific protocols.",,arXiv,"['cs.cl', 'cs.ai']",, adaptive machine translation with large language models,"['Yasmin Moslem', 'Rejwanul Haque', 'John D. Kelleher', 'Andy Way']",http://arxiv.org/pdf/2301.13294v3.pdf,2023-01-30,," Consistency is a key requirement of high-quality translation. It is especially important to adhere to pre-approved terminology and adapt to corrected translations in domain-specific projects. Machine translation (MT) has achieved significant progress in the area of domain adaptation.
However,real-time adaptation remains challenging. Large-scale language models (LLMs)have recently shown interesting capabilities of in-context learning, where theylearn to replicate certain input-output text generation patterns, withoutfurther fine-tuning. By feeding an LLM at inference time with a prompt thatconsists of a list of translation pairs, it can then simulate the domain andstyle characteristics. This work aims to investigate how we can utilizein-context learning to improve real-time adaptive MT. Our extensive experimentsshow promising results at translation time. For example, LLMs can adapt to aset of in-domain sentence pairs and/or terminology while translating a newsentence. We observe that the translation quality with few-shot in-contextlearning can surpass that of strong encoder-decoder MT systems, especially forhigh-resource languages. Moreover, we investigate whether we can combine MTfrom strong encoder-decoder models with fuzzy matches, which can furtherimprove translation quality, especially for less supported languages. Weconduct our experiments across five diverse language pairs, namelyEnglish-to-Arabic (EN-AR), English-to-Chinese (EN-ZH), English-to-French(EN-FR), English-to-Kinyarwanda (EN-RW), and English-to-Spanish (EN-ES).",,arXiv,['cs.cl'],, scattershot interactive incontext example curation for text transformation,"['Tongshuang Wu', 'Hua Shen', 'Daniel S. Weld', 'Jeffrey Heer', 'Marco Tulio Ribeiro']",http://arxiv.org/pdf/2302.07346v1.pdf,2023-02-14,," The in-context learning capabilities of LLMs like GPT-3 allow annotators tocustomize an LLM to their specific tasks with a small number of examples.However, users tend to include only the most obvious patterns when craftingexamples, resulting in underspecified in-context functions that fall short onunseen cases. Further, it is hard to know when ""enough"" examples have beenincluded even for known patterns. In this work, we present ScatterShot, aninteractive system for building high-quality demonstration sets for in-contextlearning. ScatterShot iteratively slices unlabeled data into task-specificpatterns, samples informative inputs from underexplored or not-yet-saturatedslices in an active learning manner, and helps users label more efficientlywith the help of an LLM and the current example set. In simulation studies ontwo text perturbation scenarios, ScatterShot sampling improves the resultingfew-shot functions by 4-5 percentage points over random sampling, with lessvariance as more examples are added. In a user study, ScatterShot greatly helpsusers in covering different patterns in the input space and labeling in-contextexamples more efficiently, resulting in better in-context learning and lessuser effort.",,arXiv,"['cs.hc', 'cs.cl']",, resources and fewshot learners for incontext learning in slavic languages,"['Michal Štefánik', 'Marek Kadlčík', 'Piotr Gramacki', 'Petr Sojka']",http://arxiv.org/pdf/2304.01922v1.pdf,2023-04-04,," Despite the rapid recent progress in creating accurate and compact in-contextlearners, most recent work focuses on in-context learning (ICL) for tasks inEnglish. However, the ability to interact with users of languages outsideEnglish presents a great potential for broadening the applicability of languagetechnologies to non-English speakers. In this work, we collect the infrastructure necessary for training andevaluation of ICL in a selection of Slavic languages: Czech, Polish, andRussian. 
We link a diverse set of datasets and cast these into a unifiedinstructional format through a set of transformations and newly-craftedtemplates written purely in target languages. Using the newly-curated dataset,we evaluate a set of the most recent in-context learners and compare theirresults to the supervised baselines. Finally, we train, evaluate and publish aset of in-context learning models that we train on the collected resources andcompare their performance to previous work. We find that ICL models tuned in English are also able to learn some tasksfrom non-English contexts, but multilingual instruction fine-tuningconsistently improves the ICL ability. We also find that the massive multitasktraining can be outperformed by single-task training in the target language,uncovering the potential for specializing in-context learners to thelanguage(s) of their application.",,arXiv,['cs.cl'],, unified demonstration retriever for incontext learning,"['Xiaonan Li', 'Kai Lv', 'Hang Yan', 'Tianyang Lin', 'Wei Zhu', 'Yuan Ni', 'Guotong Xie', 'Xiaoling Wang', 'Xipeng Qiu']",http://arxiv.org/pdf/2305.04320v2.pdf,2023-05-07,," In-context learning is a new learning paradigm where a language modelconditions on a few input-output pairs (demonstrations) and a test input, anddirectly outputs the prediction. It has been shown highly dependent on theprovided demonstrations and thus promotes the research of demonstrationretrieval: given a test input, relevant examples are retrieved from thetraining set to serve as informative demonstrations for in-context learning.While previous works focus on training task-specific retrievers for severaltasks separately, these methods are often hard to transfer and scale on varioustasks, and separately trained retrievers incur a lot of parameter storage anddeployment cost. In this paper, we propose Unified Demonstration Retriever(\textbf{UDR}), a single model to retrieve demonstrations for a wide range oftasks. To train UDR, we cast various tasks' training signals into a unifiedlist-wise ranking formulation by language model's feedback. Then we propose amulti-task list-wise ranking training framework, with an iterative miningstrategy to find high-quality candidates, which can help UDR fully incorporatevarious tasks' signals. Experiments on 30+ tasks across 13 task families andmultiple data domains show that UDR significantly outperforms baselines.Further analyses show the effectiveness of each proposed component and UDR'sstrong ability in various scenarios including different LMs (1.3B - 175B),unseen datasets, varying demonstration quantities, etc.",,arXiv,['cs.cl'],, efficient prompting via dynamic incontext learning,"['Wangchunshu Zhou', 'Yuchen Eleanor Jiang', 'Ryan Cotterell', 'Mrinmaya Sachan']",http://arxiv.org/pdf/2305.11170v1.pdf,2023-05-18,," The primary way of building AI applications is shifting from trainingspecialist models to prompting generalist models. A common practice forprompting generalist models, often referred to as in-context learning, is toappend a few examples (demonstrations) to the prompt to help the model betterunderstand the task. While effective, in-context learning can be inefficientbecause it makes the input prompt much longer, consuming valuable space in thecontext window and leading to larger computational costs. In this paper, wepropose DynaICL, a recipe for efficient prompting with black-box generalistmodels that dynamically allocate in-context examples according to the inputcomplexity and the computational budget. 
To achieve this, we train a metacontroller that predicts the number of in-context examples suitable for thegeneralist model to make a good prediction based on the performance-efficiencytrade-off for a specific input. We then dynamically allocate the number ofdemonstrations for an input according to predictions from the meta controllerand the given computation budget. Experimental results show that dynamicexample allocation helps achieve a better performance-efficiency trade-off intwo practical settings where computational resources or the requiredperformance is constrained. Specifically, DynaICL saves up to 46% token budgetcompared to the common practice that allocates the same number of in-contextexamples to each input. We also find that a meta controller trained on acertain backbone model and tasks can successfully generalize to unseen modelsand tasks.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, post hoc explanations of language models can improve language models,"['Satyapriya Krishna', 'Jiaqi Ma', 'Dylan Slack', 'Asma Ghandeharioun', 'Sameer Singh', 'Himabindu Lakkaraju']",http://arxiv.org/pdf/2305.11426v3.pdf,2023-05-19,," Large Language Models (LLMs) have demonstrated remarkable capabilities inperforming complex tasks. Moreover, recent research has shown thatincorporating human-annotated rationales (e.g., Chain-of-Thought prompting)during in-context learning can significantly enhance the performance of thesemodels, particularly on tasks that require reasoning capabilities. However,incorporating such rationales poses challenges in terms of scalability as thisrequires a high degree of human involvement. In this work, we present a novelframework, Amplifying Model Performance by Leveraging In-Context Learning withPost Hoc Explanations (AMPLIFY), which addresses the aforementioned challengesby automating the process of rationale generation. To this end, we leveragepost hoc explanation methods which output attribution scores (explanations)capturing the influence of each of the input features on model predictions.More specifically, we construct automated natural language rationales thatembed insights from post hoc explanations to provide corrective signals toLLMs. Extensive experimentation with real-world datasets demonstrates that ourframework, AMPLIFY, leads to prediction accuracy improvements of about 10-25%over a wide range of tasks, including those where prior approaches which relyon human-annotated rationales such as Chain-of-Thought prompting fall short.Our work makes one of the first attempts at highlighting the potential of posthoc explanations as valuable tools for enhancing the effectiveness of LLMs.Furthermore, we conduct additional empirical analyses and ablation studies todemonstrate the impact of each of the components of AMPLIFY, which, in turn,leads to critical insights for refining in-context learning.",,arXiv,"['cs.cl', 'cs.ai']",, reticl sequential retrieval of incontext examples with reinforcement learning,"['Alexander Scarlatos', 'Andrew Lan']",http://arxiv.org/pdf/2305.14502v1.pdf,2023-05-23,," Many recent developments in large language models focus on prompting them toperform specific tasks. One effective prompting method is in-context learning,where the model performs a (possibly new) generation/prediction task given one(or more) examples. Past work has shown that the choice of examples can make alarge impact on task performance. 
However, finding good examples is notstraightforward since the definition of a representative group of examples canvary greatly depending on the task. While there are many existing methods forselecting in-context examples, they generally score examples independently,ignoring the dependency between them and the order in which they are providedto the large language model. In this work, we propose Retrieval for In-ContextLearning (RetICL), a learnable method for modeling and optimally selectingexamples sequentially for in-context learning. We frame the problem ofsequential example selection as a Markov decision process, design an exampleretriever model using an LSTM, and train it using proximal policy optimization(PPO). We validate RetICL on math problem solving datasets and show that itoutperforms both heuristic and learnable baselines, and achievesstate-of-the-art accuracy on the TabMWP dataset. We also use case studies toshow that RetICL implicitly learns representations of math problem solvingstrategies.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, metricbased incontext learning a case study in text simplification,"['Subha Vadlamannati', 'Gözde Gül Şahin']",http://arxiv.org/pdf/2307.14632v1.pdf,2023-07-27,," In-context learning (ICL) for large language models has proven to be apowerful approach for many natural language processing tasks. However,determining the best method to select examples for ICL is nontrivial as theresults can vary greatly depending on the quality, quantity, and order ofexamples used. In this paper, we conduct a case study on text simplification(TS) to investigate how to select the best and most robust examples for ICL. Wepropose Metric-Based in-context Learning (MBL) method that utilizes commonlyused TS metrics such as SARI, compression ratio, and BERT-Precision forselection. Through an extensive set of experiments with various-sized GPTmodels on standard TS benchmarks such as TurkCorpus and ASSET, we show thatexamples selected by the top SARI scores perform the best on larger models suchas GPT-175B, while the compression ratio generally performs better on smallermodels such as GPT-13B and GPT-6.7B. Furthermore, we demonstrate that MBL isgenerally robust to example orderings and out-of-domain test sets, andoutperforms strong baselines and state-of-the-art finetuned language models.Finally, we show that the behaviour of large GPT models can be implicitlycontrolled by the chosen metric. Our research provides a new framework forselecting examples in ICL, and demonstrates its effectiveness in textsimplification tasks, breaking new ground for more accurate and efficient NLGsystems.",,arXiv,"['cs.cl', 'cs.ai']",, hicl hashtagdriven incontext learning for social media natural language understanding,"['Hanzhuo Tan', 'Chunpu Xu', 'Jing Li', 'Yuqun Zhang', 'Zeyang Fang', 'Zeyu Chen', 'Baohua Lai']",http://arxiv.org/pdf/2308.09985v1.pdf,2023-08-19,," Natural language understanding (NLU) is integral to various social mediaapplications. However, existing NLU models rely heavily on context for semanticlearning, resulting in compromised performance when faced with short and noisysocial media content. To address this issue, we leverage in-context learning(ICL), wherein language models learn to make inferences by conditioning on ahandful of demonstrations to enrich the context and propose a novelhashtag-driven in-context learning (HICL) framework. 
Concretely, we pre-train a model #Encoder, which employs #hashtags (user-annotated topic labels) to drive BERT-based pre-training through contrastive learning. Our objective here is to enable #Encoder to gain the ability to incorporate topic-related semantic information, which allows it to retrieve topic-related posts to enrich contexts and enhance social media NLU with noisy contexts. To further integrate the retrieved context with the source text, we employ a gradient-based method to identify trigger terms useful in fusing information from both sources. For empirical studies, we collected 45M tweets to set up an in-context NLU benchmark, and the experimental results on seven downstream tasks show that HICL substantially advances the previous state-of-the-art results. Furthermore, we conducted extensive analyses and found that: (1) combining source input with a top-retrieved post from #Encoder is more effective than using semantically similar posts; (2) trigger words can largely benefit in merging context from the source and retrieved posts.",,arXiv,['cs.cl'],, incontext convergence of transformers,"['Yu Huang', 'Yuan Cheng', 'Yingbin Liang']",http://arxiv.org/pdf/2310.05249v1.pdf,2023-10-08,," Transformers have recently revolutionized many domains in modern machine learning and one salient discovery is their remarkable in-context learning capability, where models can solve an unseen task by utilizing task-specific prompts without further parameters fine-tuning. This also inspired recent theoretical studies aiming to understand the in-context learning mechanism of transformers, which however focused only on linear transformers. In this work, we take the first step toward studying the learning dynamics of a one-layer transformer with softmax attention trained via gradient descent in order to in-context learn linear function classes. We consider a structured data model, where each token is randomly sampled from a set of feature vectors in either balanced or imbalanced fashion. For data with balanced features, we establish the finite-time convergence guarantee with near-zero prediction error by navigating our analysis over two phases of the training dynamics of the attention map. More notably, for data with imbalanced features, we show that the learning dynamics take a stage-wise convergence process, where the transformer first converges to a near-zero prediction error for the query tokens of dominant features, and then converges later to a near-zero prediction error for the query tokens of under-represented features, respectively via one and four training phases. Our proof features new techniques for analyzing the competing strengths of two types of attention weights, the change of which determines different training phases.",,arXiv,"['cs.lg', 'cs.ai', 'math.oc', 'stat.ml']",, large language modelaware incontext learning for code generation,"['Jia Li', 'Ge Li', 'Chongyang Tao', 'Jia Li', 'Huangzhao Zhang', 'Fang Liu', 'Zhi Jin']",http://arxiv.org/pdf/2310.09748v1.pdf,2023-10-15,," Large language models (LLMs) have shown impressive in-context learning (ICL) ability in code generation. LLMs take a prompt consisting of requirement-code examples and a new requirement as input, and output new programs. Existing studies have found that ICL is highly dominated by the examples and thus arises research on example selection. However, existing approaches randomly select examples or only consider the textual similarity of requirements to retrieve, leading to sub-optimal performance.
In this paper, we propose a novel learning-based selection approach named LAIL (LLM-Aware In-context Learning) for code generation. Given a candidate example, we exploit LLMs themselves to estimate it by considering the generation probabilities of ground-truth programs given a requirement and the example. We then label candidate examples as positive or negative through the probability feedback. Based on the labeled data, we import a contrastive learning objective to train an effective retriever that acquires the preference of LLMs in code generation. We apply LAIL to three LLMs and evaluate it on three representative datasets (e.g., MBJP, MBPP, and MBCPP). LAIL outperforms the state-of-the-art baselines by 11.58%, 6.89%, and 5.07% on CodeGen, and 4.38%, 2.85%, and 2.74% on GPT-3.5 in terms of Pass@1, respectively.",,arXiv,"['cs.se', 'cs.cl']",, on the relation between sensitivity and accuracy in incontext learning,"['Yanda Chen', 'Chen Zhao', 'Zhou Yu', 'Kathleen McKeown', 'He He']",http://arxiv.org/pdf/2209.07661v3.pdf,2022-09-16,," In-context learning (ICL) suffers from oversensitivity to the prompt, making it unreliable in real-world scenarios. We study the sensitivity of ICL with respect to multiple perturbation types. First, we find that label bias obscures the true sensitivity, and therefore prior work may have significantly underestimated ICL sensitivity. Second, we observe a strong negative correlation between ICL sensitivity and accuracy: predictions sensitive to perturbations are less likely to be correct. Motivated by these findings, we propose \textsc{SenSel}, a few-shot selective prediction method that abstains from sensitive predictions. Experiments on ten classification datasets show that \textsc{SenSel} consistently outperforms two commonly used confidence-based and entropy-based baselines on abstention decisions.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, winodict probing language models for incontext word acquisition,"['Julian Martin Eisenschlos', 'Jeremy R. Cole', 'Fangyu Liu', 'William W. Cohen']",http://arxiv.org/pdf/2209.12153v1.pdf,2022-09-25,," We introduce a new in-context learning paradigm to measure Large Language Models' (LLMs) ability to learn novel words during inference. In particular, we rewrite Winograd-style co-reference resolution problems by replacing the key concept word with a synthetic but plausible word that the model must understand to complete the task. Solving this task requires the model to make use of the dictionary definition of the new word given in the prompt. This benchmark addresses word acquisition, one important aspect of the diachronic degradation known to afflict LLMs. As LLMs are frozen in time at the moment they are trained, they are normally unable to reflect the way language changes over time. We show that the accuracy of LLMs compared to the original Winograd tasks decreases radically in our benchmark, thus identifying a limitation of current models and providing a benchmark to measure future improvements in LLMs' ability to do in-context learning.",,arXiv,"['cs.cl', 'cs.ai']",, data curation alone can stabilize incontext learning,"['Ting-Yun Chang', 'Robin Jia']",http://arxiv.org/pdf/2212.10378v2.pdf,2022-12-20,," In-context learning (ICL) enables large language models (LLMs) to perform new tasks by prompting them with a sequence of training examples. However, it is known that ICL is very sensitive to the choice of training examples: randomly sampling examples from a training set leads to high variance in performance.
Inthis paper, we show that carefully curating a subset of training data greatlystabilizes ICL performance without any other changes to the ICL algorithm(e.g., prompt retrieval or calibration). We introduce two methods to choosetraining subsets -- both score training examples individually, then select thehighest-scoring ones. CondAcc scores a training example by its average dev-setICL accuracy when combined with random training examples, while Datamodelslearns linear regressors that estimate how the presence of each trainingexample influences LLM outputs. Across five tasks and two LLMs, sampling fromstable subsets selected by CondAcc and Datamodels improves average accuracyover sampling from the entire training set by 7.7% and 6.3%, respectively.Surprisingly, the stable subset examples are not especially diverse in contentor low in perplexity, in contrast with other work suggesting that diversity andperplexity are important when prompting LLMs.",,arXiv,['cs.cl'],, a survey on incontext learning,"['Qingxiu Dong', 'Lei Li', 'Damai Dai', 'Ce Zheng', 'Zhiyong Wu', 'Baobao Chang', 'Xu Sun', 'Jingjing Xu', 'Lei Li', 'Zhifang Sui']",http://arxiv.org/pdf/2301.00234v3.pdf,2022-12-31,," With the increasing ability of large language models (LLMs), in-contextlearning (ICL) has become a new paradigm for natural language processing (NLP),where LLMs make predictions only based on contexts augmented with a fewexamples. It has been a new trend to explore ICL to evaluate and extrapolatethe ability of LLMs. In this paper, we aim to survey and summarize the progressand challenges of ICL. We first present a formal definition of ICL and clarifyits correlation to related studies. Then, we organize and discuss advancedtechniques, including training strategies, demonstration designing strategies,as well as related analysis. Finally, we discuss the challenges of ICL andprovide potential directions for further research. We hope that our work canencourage more research on uncovering how ICL works and improving ICL.",,arXiv,"['cs.cl', 'cs.ai']",, towards fewshot identification of morality frames using incontext learning,"['Shamik Roy', 'Nishanth Sridhar Nakshatri', 'Dan Goldwasser']",http://arxiv.org/pdf/2302.02029v1.pdf,2023-02-03,," Data scarcity is a common problem in NLP, especially when the annotationpertains to nuanced socio-linguistic concepts that require specializedknowledge. As a result, few-shot identification of these concepts is desirable.Few-shot in-context learning using pre-trained Large Language Models (LLMs) hasbeen recently applied successfully in many NLP tasks. In this paper, we studyfew-shot identification of a psycho-linguistic concept, Morality Frames (Roy etal., 2021), using LLMs. Morality frames are a representation framework thatprovides a holistic view of the moral sentiment expressed in text, identifyingthe relevant moral foundation (Haidt and Graham, 2007) and at a finer level ofgranularity, the moral sentiment expressed towards the entities mentioned inthe text. Previous studies relied on human annotation to identify moralityframes in text which is expensive. In this paper, we propose prompting-basedapproaches using pretrained Large Language Models for identification ofmorality frames, relying only on few-shot exemplars. 
We compare our models'performance with few-shot RoBERTa and found promising results.",,arXiv,['cs.cl'],, openicl an opensource framework for incontext learning,"['Zhenyu Wu', 'YaoXiang Wang', 'Jiacheng Ye', 'Jiangtao Feng', 'Jingjing Xu', 'Yu Qiao', 'Zhiyong Wu']",http://arxiv.org/pdf/2303.02913v1.pdf,2023-03-06,," In recent years, In-context Learning (ICL) has gained increasing attentionand emerged as the new paradigm for large language model (LLM) evaluation.Unlike traditional fine-tuning methods, ICL instead adapts the pre-trainedmodels to unseen tasks without any parameter updates. However, theimplementation of ICL is sophisticated due to the diverse retrieval andinference methods involved, as well as the varying pre-processing requirementsfor different models, datasets, and tasks. A unified and flexible framework forICL is urgently needed to ease the implementation of the aforementionedcomponents. To facilitate ICL research, we introduce OpenICL, an open-sourcetoolkit for ICL and LLM evaluation. OpenICL is research-friendly with a highlyflexible architecture that users can easily combine different components tosuit their needs. It also provides various state-of-the-art retrieval andinference methods to streamline the process of adapting ICL to cutting-edgeresearch. The effectiveness of OpenICL has been validated on a wide range ofNLP tasks, including classification, QA, machine translation, and semanticparsing. As a side-product, we found OpenICL to be an efficient yet robust toolfor LLMs evaluation. OpenICL is released athttps://github.com/Shark-NLP/OpenICL",,arXiv,['cs.cl'],, the scope of incontext learning for the extraction of medical temporal constraints,"['Parker Seegmiller', 'Joseph Gatto', 'Madhusudan Basak', 'Diane Cook', 'Hassan Ghasemzadeh', 'John Stankovic', 'Sarah Preum']",http://arxiv.org/pdf/2303.09366v2.pdf,2023-03-16,," Medications often impose temporal constraints on everyday patient activity.Violations of such medical temporal constraints (MTCs) lead to a lack oftreatment adherence, in addition to poor health outcomes and increasedhealthcare expenses. These MTCs are found in drug usage guidelines (DUGs) inboth patient education materials and clinical texts. Computationallyrepresenting MTCs in DUGs will advance patient-centric healthcare applicationsby helping to define safe patient activity patterns. We define a novel taxonomyof MTCs found in DUGs and develop a novel context-free grammar (CFG) basedmodel to computationally represent MTCs from unstructured DUGs. Additionally,we release three new datasets with a combined total of N = 836 DUGs labeledwith normalized MTCs. We develop an in-context learning (ICL) solution forautomatically extracting and normalizing MTCs found in DUGs, achieving anaverage F1 score of 0.62 across all datasets. 
Finally, we rigorouslyinvestigate ICL model performance against a baseline model, across datasets andMTC types, and through in-depth error analysis.",,arXiv,"['cs.cl', 'cs.lg']",, gptre incontext learning for relation extraction using large language models,"['Zhen Wan', 'Fei Cheng', 'Zhuoyuan Mao', 'Qianying Liu', 'Haiyue Song', 'Jiwei Li', 'Sadao Kurohashi']",http://arxiv.org/pdf/2305.02105v3.pdf,2023-05-03,," In spite of the potential for ground-breaking achievements offered by largelanguage models (LLMs) (e.g., GPT-3), they still lag significantly behindfully-supervised baselines (e.g., fine-tuned BERT) in relation extraction (RE).This is due to the two major shortcomings of LLMs in RE: (1) low relevanceregarding entity and relation in retrieved demonstrations for in-contextlearning; and (2) the strong inclination to wrongly classify NULL examples intoother pre-defined labels. In this paper, we propose GPT-RE to bridge the gap between LLMs andfully-supervised baselines. GPT-RE successfully addresses the aforementionedissues by (1) incorporating task-specific entity representations indemonstration retrieval; and (2) enriching the demonstrations with goldlabel-induced reasoning logic. We evaluate GPT-RE on four widely-used REdatasets, and observe that GPT-RE achieves improvements over not only existingGPT-3 baselines, but also fully-supervised baselines. Specifically, GPT-REachieves SOTA performances on the Semeval and SciERC datasets, and competitiveperformances on the TACRED and ACE05 datasets.",,arXiv,['cs.cl'],, gersteinlab at mediqachat 2023 clinical note summarization from doctorpatient conversations through finetuning and incontext learning,"['Xiangru Tang', 'Andrew Tran', 'Jeffrey Tan', 'Mark Gerstein']",http://arxiv.org/pdf/2305.05001v1.pdf,2023-05-08,," This paper presents our contribution to the MEDIQA-2023 Dialogue2Note sharedtask, encompassing both subtask A and subtask B. We approach the task as adialogue summarization problem and implement two distinct pipelines: (a) afine-tuning of a pre-trained dialogue summarization model and GPT-3, and (b)few-shot in-context learning (ICL) using a large language model, GPT-4. Bothmethods achieve excellent results in terms of ROUGE-1 F1, BERTScore F1(deberta-xlarge-mnli), and BLEURT, with scores of 0.4011, 0.7058, and 0.5421,respectively. Additionally, we predict the associated section headers usingRoBERTa and SciBERT based classification models. Our team ranked fourth amongall teams, while each team is allowed to submit three runs as part of theirsubmission. We also utilize expert annotations to demonstrate that the notesgenerated through the ICL GPT-4 are better than all other baselines. The codefor our submission is available.",,arXiv,['cs.cl'],, can we edit factual knowledge by incontext learning,"['Ce Zheng', 'Lei Li', 'Qingxiu Dong', 'Yuxuan Fan', 'Zhiyong Wu', 'Jingjing Xu', 'Baobao Chang']",http://arxiv.org/pdf/2305.12740v1.pdf,2023-05-22,," Previous studies have shown that large language models (LLMs) like GPTs storemassive factual knowledge in their parameters. However, the stored knowledgecould be false or out-dated. Traditional knowledge editing methods refine LLMsvia fine-tuning on texts containing specific knowledge. However, with theincreasing scales of LLMs, these gradient-based approaches bring largecomputation costs. The trend of model-as-a-service also makes it impossible tomodify knowledge in black-box LMs. 
Inspired by in-context learning (ICL), a new paradigm based on demonstration contexts without parameter updating, we explore whether ICL can edit factual knowledge. To answer this question, we give a comprehensive empirical study of ICL strategies. Experiments show that in-context knowledge editing (IKE), without any gradient and parameter updating, achieves a competitive success rate compared to gradient-based methods on GPT-J (6B) but with much fewer side effects, including less over-editing on similar but unrelated facts and less knowledge forgetting on previously stored knowledge. We also apply the method to larger LMs with tens or hundreds of billions of parameters like OPT-175B, which shows the scalability of our method. The code is available at https://github.com/Zce1112zslx/IKE.",,arXiv,['cs.cl'],, coveragebased example selection for incontext learning,"['Shivanshu Gupta', 'Matt Gardner', 'Sameer Singh']",http://arxiv.org/pdf/2305.14907v3.pdf,2023-05-24,," In-context learning (ICL), the ability of large language models to perform novel tasks by conditioning on a prompt with a few task examples, requires these examples to be informative about the test instance. The standard approach of independently ranking and selecting the most similar examples selects redundant examples while omitting important information. In this work, we show that BERTScore-Recall (BSR) selects better examples that demonstrate more of the salient aspects, e.g. reasoning patterns, of the test input. We further extend BSR and many standard metrics to easily optimizable set-level metrics, giving still better coverage of those salient aspects. On 15 datasets spanning 6 tasks and with 7 diverse LLMs, we show that (1) BSR is the superior metric for in-context example selection across the board, and (2) for compositional tasks, set selection using Set-BSR outperforms independent ranking by up to 17 points on average and, despite being training-free, surpasses methods that leverage task or LLM-specific training.",,arXiv,['cs.cl'],, leveraging large language models for scalable vector graphicsdriven image understanding,"['Mu Cai', 'Zeyi Huang', 'Yuheng Li', 'Haohan Wang', 'Yong Jae Lee']",http://arxiv.org/pdf/2306.06094v1.pdf,2023-06-09,," Recently, large language models (LLMs) have made significant advancements in natural language understanding and generation. However, their potential in computer vision remains largely unexplored. In this paper, we introduce a new, exploratory approach that enables LLMs to process images using the Scalable Vector Graphics (SVG) format. By leveraging the XML-based textual descriptions of SVG representations instead of raster images, we aim to bridge the gap between the visual and textual modalities, allowing LLMs to directly understand and manipulate images without the need for parameterized visual components. Our method facilitates simple image classification, generation, and in-context learning using only LLM capabilities. We demonstrate the promise of our approach across discriminative and generative tasks, highlighting its (i) robustness against distribution shift, (ii) substantial improvements achieved by tapping into the in-context learning abilities of LLMs, and (iii) image understanding and generation capabilities with human guidance.
Our code, data, and models can be found here: https://github.com/mu-cai/svg-llm.",,arXiv,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.lg']",, exploring the incontext learning ability of large language model for biomedical concept linking,"['Qinyong Wang', 'Zhenxiang Gao', 'Rong Xu']",http://arxiv.org/pdf/2307.01137v1.pdf,2023-07-03,," The biomedical field relies heavily on concept linking in various areas such as literature mining, graph alignment, information retrieval, question-answering, data, and knowledge integration. Although large language models (LLMs) have made significant strides in many natural language processing tasks, their effectiveness in biomedical concept mapping is yet to be fully explored. This research investigates a method that exploits the in-context learning (ICL) capabilities of large models for biomedical concept linking. The proposed approach adopts a two-stage retrieve-and-rank framework. Initially, biomedical concepts are embedded using language models, and then embedding similarity is utilized to retrieve the top candidates. These candidates' contextual information is subsequently incorporated into the prompt and processed by a large language model to re-rank the concepts. This approach achieved an accuracy of 90% in BC5CDR disease entity normalization and 94.7% in chemical entity normalization, exhibiting a competitive performance relative to supervised learning methods. Further, it showed a significant improvement, with an over 20-point absolute increase in F1 score on an oncology matching dataset. Extensive qualitative assessments were conducted, and the benefits and potential shortcomings of using large language models within the biomedical domain were discussed.",,arXiv,"['cs.cl', 'cs.ai']",, learning to retrieve incontext examples for large language models,"['Liang Wang', 'Nan Yang', 'Furu Wei']",http://arxiv.org/pdf/2307.07164v2.pdf,2023-07-14,," Large language models (LLMs) have demonstrated their ability to learn in-context, allowing them to perform various tasks based on a few input-output examples. However, the effectiveness of in-context learning is heavily reliant on the quality of the selected examples. In this paper, we propose a novel framework to iteratively train dense retrievers that can identify high-quality in-context examples for LLMs. Our framework initially trains a reward model based on LLM feedback to evaluate the quality of candidate examples, followed by knowledge distillation to train a bi-encoder based dense retriever. Our experiments on a suite of $30$ tasks demonstrate that our framework significantly enhances in-context learning performance. Furthermore, we show the generalization ability of our framework to unseen tasks during training. An in-depth analysis reveals that our model improves performance by retrieving examples with similar patterns, and the gains are consistent across LLMs of varying sizes. The code and data are available at https://github.com/microsoft/LMOps/tree/main/llm_retriever.",,arXiv,"['cs.cl', 'cs.ir']",, incontext learning learns label relationships but is not conventional learning,"['Jannik Kossen', 'Yarin Gal', 'Tom Rainforth']",http://arxiv.org/pdf/2307.12375v3.pdf,2023-07-23,," The predictions of Large Language Models (LLMs) on downstream tasks often improve significantly when including examples of the input--label relationship in the context. However, there is currently no consensus about how this in-context learning (ICL) ability of LLMs works.
For example, while Xie et al.(2021) liken ICL to a general-purpose learning algorithm, Min et al. (2022)argue ICL does not even learn label relationships from in-context examples. Inthis paper, we provide novel insights into how ICL leverages label information,revealing both capabilities and limitations. To ensure we obtain acomprehensive picture of ICL behavior, we study probabilistic aspects of ICLpredictions and thoroughly examine the dynamics of ICL as more examples areprovided. Our experiments show that ICL predictions almost always depend onin-context labels, and that ICL can learn truly novel tasks in-context.However, we also find that ICL struggles to fully overcome predictionpreferences acquired from pre-training data, and, further, that ICL does notconsider all in-context information equally.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, causallm is not optimal for incontext learning,"['Nan Ding', 'Tomer Levinboim', 'Jialin Wu', 'Sebastian Goodman', 'Radu Soricut']",http://arxiv.org/pdf/2308.06912v2.pdf,2023-08-14,," Recent empirical evidence indicates that transformer based in-contextlearning performs better when using a prefix language model (prefixLM), inwhich in-context samples can all attend to each other, compared to causallanguage models (causalLM), which use auto-regressive attention that prohibitsin-context samples to attend to future samples. While this result is intuitive,it is not understood from a theoretical perspective. In this paper we take atheoretical approach and analyze the convergence behavior of prefixLM andcausalLM under a certain parameter construction. Our analysis shows that bothLM types converge to their stationary points at a linear rate, but that whileprefixLM converges to the optimal solution of linear regression, causalLMconvergence dynamics follows that of an online gradient descent algorithm,which is not guaranteed to be optimal even as the number of samples growsinfinitely. We supplement our theoretical claims with empirical experimentsover synthetic and real tasks and using various types of transformers. Ourexperiments verify that causalLM consistently underperforms prefixLM in allsettings.",,arXiv,"['cs.lg', 'cs.cl']",, exploring demonstration ensembling for incontext learning,"['Muhammad Khalifa', 'Lajanugen Logeswaran', 'Moontae Lee', 'Honglak Lee', 'Lu Wang']",http://arxiv.org/pdf/2308.08780v2.pdf,2023-08-17,," In-context learning (ICL) operates by showing language models (LMs) examplesof input-output pairs for a given task, i.e., demonstrations. The standardapproach for ICL is to prompt the LM with concatenated demonstrations followedby the test input. This approach suffers from some issues. First, concatenationoffers almost no control over the contribution of each demo to the modelprediction. This can be sub-optimal when some demonstrations are irrelevant tothe test example. Second, due to the input length limit of some transformermodels, it might be infeasible to fit many examples into the context,especially when dealing with long-input tasks. In this work, we exploreDemonstration Ensembling (DENSE) as an alternative to simple concatenation.DENSE predicts outputs using subsets (i.e., buckets) of the demonstrations andthen combines the output probabilities resulting from each subset to producethe final prediction. We study different ensembling methods using GPT-j andexperiment on 12 language tasks. Our experiments show weighted max ensemblingto outperform vanilla concatenation by as large as 2.4 average points. 
Codeavailable at https://github.com/mukhal/icl-ensembling.",,arXiv,"['cs.cl', 'cs.ai']",, context is environment,"['Sharut Gupta', 'Stefanie Jegelka', 'David Lopez-Paz', 'Kartik Ahuja']",http://arxiv.org/pdf/2309.09888v2.pdf,2023-09-18,," Two lines of work are taking the central stage in AI research. On the onehand, the community is making increasing efforts to build models that discardspurious correlations and generalize better in novel test environments.Unfortunately, the bitter lesson so far is that no proposal convincinglyoutperforms a simple empirical risk minimization baseline. On the other hand,large language models (LLMs) have erupted as algorithms able to learnin-context, generalizing on-the-fly to eclectic contextual circumstances thatusers enforce by means of prompting. In this paper, we argue that context isenvironment, and posit that in-context learning holds the key to better domaingeneralization. Via extensive theory and experiments, we show that payingattention to context$\unicode{x2013}\unicode{x2013}$unlabeled examples as theyarrive$\unicode{x2013}\unicode{x2013}$allows our proposed In-Context RiskMinimization (ICRM) algorithm to zoom-in on the test environment riskminimizer, leading to significant out-of-distribution performance improvements.From all of this, two messages are worth taking home. Researchers in domaingeneralization should consider environment as context, and harness the adaptivepower of in-context learning. Researchers in LLMs should consider context asenvironment, to better structure data towards generalization.",,arXiv,"['cs.lg', 'cs.ai', 'stat.ml']",, "prompt, condition, and generate classification of unsupported claims with incontext learning","['Peter Ebert Christensen', 'Srishti Yadav', 'Serge Belongie']",http://arxiv.org/pdf/2309.10359v1.pdf,2023-09-19,," Unsupported and unfalsifiable claims we encounter in our daily lives caninfluence our view of the world. Characterizing, summarizing, and -- moregenerally -- making sense of such claims, however, can be challenging. In thiswork, we focus on fine-grained debate topics and formulate a new task ofdistilling, from such claims, a countable set of narratives. We present acrowdsourced dataset of 12 controversial topics, comprising more than 120karguments, claims, and comments from heterogeneous sources, each annotated witha narrative label. We further investigate how large language models (LLMs) canbe used to synthesise claims using In-Context Learning. We find that generatedclaims with supported evidence can be used to improve the performance ofnarrative classification models and, additionally, that the same model caninfer the stance and aspect using a few training examples. Such a model can beuseful in applications which rely on narratives , e.g. fact-checking.",,arXiv,['cs.cl'],, incontext learning for text classification with many labels,"['Aristides Milios', 'Siva Reddy', 'Dzmitry Bahdanau']",http://arxiv.org/pdf/2309.10954v2.pdf,2023-09-19,," In-context learning (ICL) using large language models for tasks with manylabels is challenging due to the limited context window, which makes itdifficult to fit a sufficient number of examples in the prompt. In this paper,we use a pre-trained dense retrieval model to bypass this limitation, givingthe model only a partial view of the full label space for each inference call.Testing with recent open-source LLMs (OPT, LLaMA), we set new state of the artperformance in few-shot settings for three common intent classificationdatasets, with no finetuning. 
We also surpass fine-tuned performance onfine-grained sentiment classification in certain cases. We analyze theperformance across number of in-context examples and different model scales,showing that larger models are necessary to effectively and consistently makeuse of larger context lengths for ICL. By running several ablations, we analyzethe model's use of: a) the similarity of the in-context examples to the currentinput, b) the semantic content of the class names, and c) the correctcorrespondence between examples and labels. We demonstrate that all three areneeded to varying degrees depending on the domain, contrary to certain recentworks.",,arXiv,"['cs.cl', 'cs.lg']",, privacypreserving incontext learning with differentially private fewshot generation,"['Xinyu Tang', 'Richard Shin', 'Huseyin A. Inan', 'Andre Manoel', 'Fatemehsadat Mireshghallah', 'Zinan Lin', 'Sivakanth Gopi', 'Janardhan Kulkarni', 'Robert Sim']",http://arxiv.org/pdf/2309.11765v2.pdf,2023-09-21,," We study the problem of in-context learning (ICL) with large language models(LLMs) on private datasets. This scenario poses privacy risks, as LLMs may leakor regurgitate the private examples demonstrated in the prompt. We propose anovel algorithm that generates synthetic few-shot demonstrations from theprivate dataset with formal differential privacy (DP) guarantees, and showempirically that it can achieve effective ICL. We conduct extensive experimentson standard benchmarks and compare our algorithm with non-private ICL andzero-shot solutions. Our results demonstrate that our algorithm can achievecompetitive performance with strong privacy levels. These results open up newpossibilities for ICL with privacy protection for a broad range ofapplications.",,arXiv,"['cs.lg', 'cs.cr']",, hrot hybrid prompt strategy and retrieval of thought for tabletext hybrid question answering,"['Tongxu Luo', 'Fangyu Lei', 'Jiahe Lei', 'Weihao Liu', 'Shihu He', 'Jun Zhao', 'Kang Liu']",http://arxiv.org/pdf/2309.12669v1.pdf,2023-09-22,," Answering numerical questions over hybrid contents from the given tables andtext(TextTableQA) is a challenging task. Recently, Large Language Models (LLMs)have gained significant attention in the NLP community. With the emergence oflarge language models, In-Context Learning and Chain-of-Thought prompting havebecome two particularly popular research topics in this field. In this paper,we introduce a new prompting strategy called Hybrid prompt strategy andRetrieval of Thought for TextTableQA. Through In-Context Learning, we promptthe model to develop the ability of retrieval thinking when dealing with hybriddata. Our method achieves superior performance compared to the fully-supervisedSOTA on the MultiHiertt dataset in the few-shot setting.",,arXiv,['cs.cl'],, allure auditing and improving llmbased evaluation of text using iterative incontextlearning,"['Hosein Hasanbeig', 'Hiteshi Sharma', 'Leo Betthauser', 'Felipe Vieira Frujeri', 'Ida Momennejad']",http://arxiv.org/pdf/2309.13701v2.pdf,2023-09-24,," From grading papers to summarizing medical documents, large language models(LLMs) are evermore used for evaluation of text generated by humans and AIalike. However, despite their extensive utility, LLMs exhibit distinct failuremodes, necessitating a thorough audit and improvement of their text evaluationcapabilities. Here we introduce ALLURE, a systematic approach to Auditing LargeLanguage Models Understanding and Reasoning Errors. 
ALLURE involves comparing LLM-generated evaluations with annotated data, and iteratively incorporating instances of significant deviation into the evaluator, which leverages in-context learning (ICL) to enhance and improve robust evaluation of text by LLMs. Through this iterative process, we refine the performance of the evaluator LLM, ultimately reducing reliance on human annotators in the evaluation process. We anticipate ALLURE to serve diverse applications of LLMs in various domains related to evaluation of textual data, such as medical summarization, education, and productivity.",,arXiv,"['cs.cl', 'cs.ai', 'cs.hc']",, dynamic demonstrations controller for incontext learning,"['Fei Zhao', 'Taotian Pang', 'Zhen Wu', 'Zheng Ma', 'Shujian Huang', 'Xinyu Dai']",http://arxiv.org/pdf/2310.00385v1.pdf,2023-09-30,," In-Context Learning (ICL) is a new paradigm for natural language processing (NLP), where a large language model (LLM) observes a small number of demonstrations and a test instance as its input, and directly makes predictions without updating model parameters. Previous studies have revealed that ICL is sensitive to the selection and the ordering of demonstrations. However, there are few studies regarding the impact of the demonstration number on the ICL performance within a limited input length of LLM, because it is commonly believed that the number of demonstrations is positively correlated with model performance. In this paper, we find this conclusion does not always hold true. Through pilot experiments, we discover that increasing the number of demonstrations does not necessarily lead to improved performance. Building upon this insight, we propose a Dynamic Demonstrations Controller (D$^2$Controller), which can improve the ICL performance by adjusting the number of demonstrations dynamically. The experimental results show that D$^2$Controller yields a 5.4% relative improvement on eight different sizes of LLMs across ten datasets. Moreover, we also extend our method to previous ICL models and achieve competitive results.",,arXiv,"['cs.cl', 'cs.ai']",, not all demonstration examples are equally beneficial reweighting demonstration examples for incontext learning,"['Zhe Yang', 'Damai Dai', 'Peiyi Wang', 'Zhifang Sui']",http://arxiv.org/pdf/2310.08309v1.pdf,2023-10-12,," Large Language Models (LLMs) have recently gained the In-Context Learning (ICL) ability with the models scaling up, allowing them to quickly adapt to downstream tasks with only a few demonstration examples prepended in the input sequence. Nonetheless, the current practice of ICL treats all demonstration examples equally, which still warrants improvement, as the quality of examples is usually uneven. In this paper, we investigate how to determine approximately optimal weights for demonstration examples and how to apply them during ICL. To assess the quality of weights in the absence of additional validation data, we design a masked self-prediction (MSP) score that exhibits a strong correlation with the final ICL performance. To expedite the weight-searching process, we discretize the continuous weight space and adopt beam search. With approximately optimal weights obtained, we further propose two strategies to apply them to demonstrations at different model positions. Experimental results on 8 text classification tasks show that our approach outperforms conventional ICL by a large margin.
Our code is publicly available at https://github.com/Zhe-Young/WICL.",,arXiv,['cs.cl'],, how many pretraining tasks are needed for incontext learning of linear regression,"['Jingfeng Wu', 'Difan Zou', 'Zixiang Chen', 'Vladimir Braverman', 'Quanquan Gu', 'Peter L. Bartlett']",http://arxiv.org/pdf/2310.08391v1.pdf,2023-10-12,," Transformers pretrained on diverse tasks exhibit remarkable in-context learning (ICL) capabilities, enabling them to solve unseen tasks solely based on input contexts without adjusting model parameters. In this paper, we study ICL in one of its simplest setups: pretraining a linearly parameterized single-layer linear attention model for linear regression with a Gaussian prior. We establish a statistical task complexity bound for the attention model pretraining, showing that effective pretraining only requires a small number of independent tasks. Furthermore, we prove that the pretrained model closely matches the Bayes optimal algorithm, i.e., optimally tuned ridge regression, by achieving nearly Bayes optimal risk on unseen tasks under a fixed context length. These theoretical findings complement prior experimental research and shed light on the statistical foundations of ICL.",,arXiv,"['stat.ml', 'cs.lg']",, generative calibration for incontext learning,"['Zhongtao Jiang', 'Yuanzhe Zhang', 'Cao Liu', 'Jun Zhao', 'Kang Liu']",http://arxiv.org/pdf/2310.10266v1.pdf,2023-10-16,," As one of the most exciting features of large language models (LLMs), in-context learning is a mixed blessing. While it allows users to fast-prototype a task solver with only a few training examples, the performance is generally sensitive to various configurations of the prompt such as the choice or order of the training examples. In this paper, we for the first time theoretically and empirically identify that such a paradox is mainly due to the label shift of the in-context model to the data distribution, in which LLMs shift the label marginal $p(y)$ while having a good label conditional $p(x|y)$. With this understanding, we can simply calibrate the in-context predictive distribution by adjusting the label marginal, which is estimated via Monte-Carlo sampling over the in-context model, i.e., generation of LLMs. We call our approach generative calibration. We conduct exhaustive experiments with 12 text classification tasks and 12 LLMs scaling from 774M to 33B, and generally find that the proposed method greatly and consistently outperforms the ICL as well as state-of-the-art calibration methods, by up to 27% absolute in macro-F1. Meanwhile, the proposed method is also stable under different prompt configurations.",,arXiv,['cs.cl'],, magnifico evaluating the incontext learning ability of large language models to generalize to novel interpretations,"['Arkil Patel', 'Satwik Bhattamishra', 'Siva Reddy', 'Dzmitry Bahdanau']",http://arxiv.org/pdf/2310.11634v1.pdf,2023-10-18,," Humans possess a remarkable ability to assign novel interpretations to linguistic expressions, enabling them to learn new words and understand community-specific connotations. However, Large Language Models (LLMs) have a knowledge cutoff and are costly to finetune repeatedly. Therefore, it is crucial for LLMs to learn novel interpretations in-context. In this paper, we systematically analyse the ability of LLMs to acquire novel interpretations using in-context learning.
To facilitate our study, we introduce MAGNIFICo, anevaluation suite implemented within a text-to-SQL semantic parsing frameworkthat incorporates diverse tokens and prompt settings to simulate real-worldcomplexity. Experimental results on MAGNIFICo demonstrate that LLMs exhibit asurprisingly robust capacity for comprehending novel interpretations fromnatural language descriptions as well as from discussions within longconversations. Nevertheless, our findings also highlight the need for furtherimprovements, particularly when interpreting unfamiliar words or when composingmultiple novel interpretations simultaneously in the same example.Additionally, our analysis uncovers the semantic predispositions in LLMs andreveals the impact of recency bias for information presented in long contexts.",,arXiv,['cs.cl'],, which examples to annotate for incontext learning towards effective and efficient selection,"['Costas Mavromatis', 'Balasubramaniam Srinivasan', 'Zhengyuan Shen', 'Jiani Zhang', 'Huzefa Rangwala', 'Christos Faloutsos', 'George Karypis']",http://arxiv.org/pdf/2310.20046v1.pdf,2023-10-30,," Large Language Models (LLMs) can adapt to new tasks via in-context learning(ICL). ICL is efficient as it does not require any parameter updates to thetrained LLM, but only few annotated examples as input for the LLM. In thiswork, we investigate an active learning approach for ICL, where there is alimited budget for annotating examples. We propose a model-adaptiveoptimization-free algorithm, termed AdaICL, which identifies examples that themodel is uncertain about, and performs semantic diversity-based exampleselection. Diversity-based sampling improves overall effectiveness, whileuncertainty sampling improves budget efficiency and helps the LLM learn newinformation. Moreover, AdaICL poses its sampling strategy as a Maximum Coverageproblem, that dynamically adapts based on the model's feedback and can beapproximately solved via greedy algorithms. Extensive experiments on ninedatasets and seven LLMs show that AdaICL improves performance by 4.4% accuracypoints over SOTA (7.7% relative improvement), is up to 3x more budget-efficientthan performing annotations uniformly at random, while it outperforms SOTA with2x fewer ICL examples.",,arXiv,['cs.cl'],, crosslingual retrieval augmented incontext learning for bangla,"['Xiaoqian Li', 'Ercong Nie', 'Sheng Liang']",http://arxiv.org/pdf/2311.00587v2.pdf,2023-11-01,," The promise of Large Language Models (LLMs) in Natural Language Processinghas often been overshadowed by their limited performance in low-resourcelanguages such as Bangla. To address this, our paper presents a pioneeringapproach that utilizes cross-lingual retrieval augmented in-context learning.By strategically sourcing semantically similar prompts from high-resourcelanguage, we enable multilingual pretrained language models (MPLMs), especiallythe generative model BLOOMZ, to successfully boost performance on Bangla tasks.Our extensive evaluation highlights that the cross-lingual retrieval augmentedprompts bring steady improvements to MPLMs over the zero-shot performance.",,arXiv,['cs.cl'],, dail data augmentation for incontext learning via selfparaphrase,"['Dawei Li', 'Yaxuan Li', 'Dheeraj Mekala', 'Shuyao Li', 'Yulin wang', 'Xueqi Wang', 'William Hogan', 'Jingbo Shang']",http://arxiv.org/pdf/2311.03319v1.pdf,2023-11-06,," In-Context Learning (ICL) combined with pre-trained large language models hasachieved promising results on various NLP tasks. 
However, ICL requires high-quality annotated demonstrations which might not be available in real-world scenarios. To overcome this limitation, we propose \textbf{D}ata \textbf{A}ugmentation for \textbf{I}n-Context \textbf{L}earning (\textbf{DAIL}). DAIL leverages the intuition that large language models are more familiar with the content generated by themselves. It first utilizes the language model to generate paraphrases of the test sample and employs majority voting to determine the final result based on individual predictions. Our extensive empirical evaluation shows that DAIL outperforms the standard ICL method and other ensemble-based methods in the low-resource scenario. Additionally, we explore the use of voting consistency as a confidence score of the model when the logits of predictions are inaccessible. We believe our work will stimulate further research on ICL in low-resource settings.",,arXiv,"['cs.cl', 'cs.ai']",, incontext exemplars as clues to retrieving from large associative memory,['Jiachen Zhao'],http://arxiv.org/pdf/2311.03498v2.pdf,2023-11-06,," Recently, large language models (LLMs) have made remarkable progress in natural language processing. The most representative ability of LLMs is in-context learning (ICL), which enables LLMs to learn patterns from in-context exemplars without training. The performance of ICL greatly depends on the exemplars used. However, how to choose exemplars remains unclear due to the lack of understanding of how in-context learning works. In this paper, we present a novel perspective on ICL by conceptualizing it as contextual retrieval from a model of associative memory. We establish a theoretical framework of ICL based on Hopfield Networks. Based on our framework, we look into how in-context exemplars influence the performance of ICL and propose more efficient active exemplar selection. Our study sheds new light on the mechanism of ICL by connecting it to memory retrieval, with potential implications for advancing the understanding of LLMs.",,arXiv,"['cs.cl', 'cs.lg']",, selective annotation makes language models better fewshot learners,"['Hongjin Su', 'Jungo Kasai', 'Chen Henry Wu', 'Weijia Shi', 'Tianlu Wang', 'Jiayi Xin', 'Rui Zhang', 'Mari Ostendorf', 'Luke Zettlemoyer', 'Noah A. Smith', 'Tao Yu']",http://arxiv.org/pdf/2209.01975v1.pdf,2022-09-05,," Many recent approaches to natural language tasks are built on the remarkable abilities of large language models. Large language models can perform in-context learning, where they learn a new task from a few task demonstrations, without any parameter updates. This work examines the implications of in-context learning for the creation of datasets for new natural language tasks. Departing from recent in-context learning methods, we formulate an annotation-efficient, two-step framework: selective annotation that chooses a pool of examples to annotate from unlabeled data in advance, followed by prompt retrieval that retrieves task examples from the annotated pool at test time. Based on this framework, we propose an unsupervised, graph-based selective annotation method, vote-k, to select diverse, representative examples to annotate. Extensive experiments on 10 datasets (covering classification, commonsense reasoning, dialogue, and text/code generation) demonstrate that our selective annotation method improves the task performance by a large margin. On average, vote-k achieves a 12.9%/11.4% relative gain under an annotation budget of 18/100, as compared to randomly selecting examples to annotate.
Compared to state-of-the-art supervised finetuning approaches, it yields similar performance with 10-100x less annotation cost across 10 tasks. We further analyze the effectiveness of our framework in various scenarios: language models with varying sizes, alternative selective annotation methods, and cases where there is a test data domain shift. We hope that our studies will serve as a basis for data annotations as large language models are increasingly applied to new tasks. Our code is available at https://github.com/HKUNLP/icl-selective-annotation.",,arXiv,['cs.cl'],, incontext example selection with influences,"['Tai Nguyen', 'Eric Wong']",http://arxiv.org/pdf/2302.11042v2.pdf,2023-02-21,," In-context learning (ICL) is a powerful paradigm emerged from large language models (LLMs). Despite its promises, ICL performance is known to be highly sensitive to input examples. In this work, we use $\textit{in-context influences}$ to analyze few-shot ICL performance directly from the in-context examples. Our proposed influence-based example selection method can identify both positive and negative examples, outperforming several baselines when evaluated on 9 SuperGLUE tasks. Our analysis uncovers up to a $16.3\%$ performance gap between using the most negative in-context examples compared to the most positive. In a case study, we apply our influence-based framework to quantify the phenomena of recency bias in example ordering for few-shot ICL.",,arXiv,"['cs.cl', 'cs.lg']",, "tabular representation, noisy operators, and impacts on table structure understanding tasks in llms","['Ananya Singha', 'José Cambronero', 'Sumit Gulwani', 'Vu Le', 'Chris Parnin']",http://arxiv.org/pdf/2310.10358v1.pdf,2023-10-16,," Large language models (LLMs) are increasingly applied for tabular tasks using in-context learning. The prompt representation for a table may play a role in the LLMs ability to process the table. Inspired by prior work, we generate a collection of self-supervised structural tasks (e.g. navigate to a cell and row; transpose the table) and evaluate the performance differences when using 8 formats. In contrast to past work, we introduce 8 noise operations inspired by real-world messy data and adversarial inputs, and show that such operations can impact LLM performance across formats for different structural understanding tasks.",,arXiv,"['cs.cl', 'cs.ai']",, evaluating the impact of model scale for compositional generalization in semantic parsing,"['Linlu Qiu', 'Peter Shaw', 'Panupong Pasupat', 'Tianze Shi', 'Jonathan Herzig', 'Emily Pitler', 'Fei Sha', 'Kristina Toutanova']",http://arxiv.org/pdf/2205.12253v2.pdf,2022-05-24,," Despite their strong performance on many tasks, pre-trained language models have been shown to struggle on out-of-distribution compositional generalization. Meanwhile, recent work has shown considerable improvements on many NLP tasks from model scaling. Can scaling up model size also improve compositional generalization in semantic parsing? We evaluate encoder-decoder models up to 11B parameters and decoder-only models up to 540B parameters, and compare model scaling curves for three different methods for applying a pre-trained language model to a new task: fine-tuning all parameters, prompt tuning, and in-context learning. We observe that fine-tuning generally has flat or negative scaling curves on out-of-distribution compositional generalization in semantic parsing evaluations. 
In-context learning has positive scaling curves, but is generally outperformed by much smaller fine-tuned models. Prompt-tuning can outperform fine-tuning, suggesting further potential improvements from scaling as it exhibits a more positive scaling curve. Additionally, we identify several error trends that vary with model scale. For example, larger models are generally better at modeling the syntax of the output space, but are also more prone to certain types of overfitting. Overall, our study highlights limitations of current techniques for effectively leveraging model scale for compositional generalization, while our analysis also suggests promising directions for future work.",,arXiv,['cs.cl'],, controllable dialogue simulation with incontext learning,"['Zekun Li', 'Wenhu Chen', 'Shiyang Li', 'Hong Wang', 'Jing Qian', 'Xifeng Yan']",http://arxiv.org/pdf/2210.04185v4.pdf,2022-10-09,," Building dialogue systems requires a large corpus of annotated dialogues. Such datasets are usually created via crowdsourcing, which is expensive and time-consuming. In this paper, we propose \textsc{Dialogic}, a novel dialogue simulation method based on large language model in-context learning to automate dataset creation. Seeded with a few annotated dialogues, \textsc{Dialogic} automatically selects in-context examples for demonstration and prompts GPT-3 to generate new dialogues and annotations in a controllable way. Our method can rapidly expand a small set of dialogue data with minimum or zero \textit{human involvement} and \textit{parameter update} and is thus much more cost-efficient and time-saving than crowdsourcing. Experimental results on the MultiWOZ dataset demonstrate that training a model on the simulated dialogues leads to even better performance than using the same amount of human-generated dialogues under the challenging low-resource settings, with as few as 85 dialogues as a seed. When enough data is available, our method can still serve as an effective data augmentation method. Human evaluation results also show that our simulated dialogues have near-human fluency and annotation accuracy. The code and data are available at \textbf{\url{https://github.com/Leezekun/dialogic}}.",,arXiv,"['cs.cl', 'cs.ai']",, xricl crosslingual retrievalaugmented incontext learning for crosslingual texttosql semantic parsing,"['Peng Shi', 'Rui Zhang', 'He Bai', 'Jimmy Lin']",http://arxiv.org/pdf/2210.13693v1.pdf,2022-10-25,," In-context learning using large language models has recently shown surprising results for semantic parsing tasks such as Text-to-SQL translation. Prompting GPT-3 or Codex using several examples of question-SQL pairs can produce excellent results, comparable to state-of-the-art finetuning-based models. However, existing work primarily focuses on English datasets, and it is unknown whether large language models can serve as competitive semantic parsers for other languages. To bridge this gap, our work focuses on cross-lingual Text-to-SQL semantic parsing for translating non-English utterances into SQL queries based on an English schema. We consider a zero-shot transfer learning setting with the assumption that we do not have any labeled examples in the target language (but have annotated examples in English). This work introduces the XRICL framework, which learns to retrieve relevant English exemplars for a given query to construct prompts. We also include global translation exemplars for a target language to facilitate the translation process for large language models. 
To systematically evaluate our model, we construct two new benchmark datasets, XSpider and XKaggle-dbqa, which include questions in Chinese, Vietnamese, Farsi, and Hindi. Our experiments show that XRICL effectively leverages large pre-trained language models to outperform existing baselines. Data and code are publicly available at https://github.com/Impavidity/XRICL.",,arXiv,['cs.cl'],, how many demonstrations do you need for incontext learning,"['Jiuhai Chen', 'Lichang Chen', 'Chen Zhu', 'Tianyi Zhou']",http://arxiv.org/pdf/2303.08119v3.pdf,2023-03-14,," Large language models (LLMs) are capable to perform complex reasoning by in-context learning (ICL) when provided with a few input-output demonstrations (demos) and more powerful when intermediate reasoning steps (""chain of thoughts (CoT)"") of the demos are given. Is it necessary to use multi-demo in ICL? In this paper, we study ICL using fewer demos for each test query on the tasks in~\cite{wei2022chain}. Surprisingly, we do not observe significant degradation when using only one randomly chosen demo. To study this phenomenon, for each test query, we categorize demos into ""correct demos"" leading to the correct answer, and ""wrong demos"" resulting in wrong answers. Our analysis reveals an inherent bias in those widely studied datasets: most demos are correct for a majority of test queries, which explains the good performance of using one random demo. Moreover, ICL (with and w/o CoT) using only one correct demo significantly outperforms all-demo ICL adopted by most previous works, indicating the weakness of LLMs in finding correct demo(s) for input queries, which is difficult to evaluate on the biased datasets. Furthermore, we observe a counterintuitive behavior of ICL using multi-demo, i.e., its accuracy degrades (improves) when given more correct (wrong) demos. This implies that ICL can be easily misguided by interference among demos and their spurious correlations. Our analyses highlight several fundamental challenges that need to be addressed in LLMs training, ICL, and benchmark design.",,arXiv,['cs.ai'],, improving visual question answering models through robustness analysis and incontext learning with a chain of basic questions,"['Jia-Hong Huang', 'Modar Alfadly', 'Bernard Ghanem', 'Marcel Worring']",http://arxiv.org/pdf/2304.03147v1.pdf,2023-04-06,," Deep neural networks have been critical in the task of Visual Question Answering (VQA), with research traditionally focused on improving model accuracy. Recently, however, there has been a trend towards evaluating the robustness of these models against adversarial attacks. This involves assessing the accuracy of VQA models under increasing levels of noise in the input, which can target either the image or the proposed query question, dubbed the main question. However, there is currently a lack of proper analysis of this aspect of VQA. This work proposes a new method that utilizes semantically related questions, referred to as basic questions, acting as noise to evaluate the robustness of VQA models. It is hypothesized that as the similarity of a basic question to the main question decreases, the level of noise increases. To generate a reasonable noise level for a given main question, a pool of basic questions is ranked based on their similarity to the main question, and this ranking problem is cast as a LASSO optimization problem. Additionally, this work proposes a novel robustness measure, R_score, and two basic question datasets to standardize the analysis of VQA model robustness. 
The experimental results demonstrate that the proposed evaluation method effectively analyzes the robustness of VQA models. Moreover, the experiments show that in-context learning with a chain of basic questions can enhance model accuracy.",,arXiv,"['cs.cv', 'cs.ai']",, genegpt augmenting large language models with domain tools for improved access to biomedical information,"['Qiao Jin', 'Yifan Yang', 'Qingyu Chen', 'Zhiyong Lu']",http://arxiv.org/pdf/2304.09667v3.pdf,2023-04-19,," While large language models (LLMs) have been successfully applied to various tasks, they still face challenges with hallucinations. Augmenting LLMs with domain-specific tools such as database utilities can facilitate easier and more precise access to specialized knowledge. In this paper, we present GeneGPT, a novel method for teaching LLMs to use the Web APIs of the National Center for Biotechnology Information (NCBI) for answering genomics questions. Specifically, we prompt Codex to solve the GeneTuring tests with NCBI Web APIs by in-context learning and an augmented decoding algorithm that can detect and execute API calls. Experimental results show that GeneGPT achieves state-of-the-art performance on eight tasks in the GeneTuring benchmark with an average score of 0.83, largely surpassing retrieval-augmented LLMs such as the new Bing (0.44), biomedical LLMs such as BioMedLM (0.08) and BioGPT (0.04), as well as GPT-3 (0.16) and ChatGPT (0.12). Our further analyses suggest that: (1) API demonstrations have good cross-task generalizability and are more useful than documentations for in-context learning; (2) GeneGPT can generalize to longer chains of API calls and answer multi-hop questions in GeneHop, a novel dataset introduced in this work; (3) Different types of errors are enriched in different tasks, providing valuable insights for future improvements.",,arXiv,"['cs.cl', 'cs.ai', 'q-bio.gn']",, dinsql decomposed incontext learning of texttosql with selfcorrection,"['Mohammadreza Pourreza', 'Davood Rafiei']",http://arxiv.org/pdf/2304.11015v3.pdf,2023-04-21,," There is currently a significant gap between the performance of fine-tuned models and prompting approaches using Large Language Models (LLMs) on the challenging task of text-to-SQL, as evaluated on datasets such as Spider. To improve the performance of LLMs in the reasoning process, we study how decomposing the task into smaller sub-tasks can be effective. In particular, we show that breaking down the generation problem into sub-problems and feeding the solutions of those sub-problems into LLMs can be an effective approach for significantly improving their performance. Our experiments with three LLMs show that this approach consistently improves their simple few-shot performance by roughly 10%, pushing the accuracy of LLMs towards SOTA or surpassing it. On the holdout test set of Spider, the SOTA, in terms of execution accuracy, was 79.9 and the new SOTA at the time of this writing using our approach is 85.3. Our approach with in-context learning beats many heavily fine-tuned models by at least 5%. 
Additionally, when evaluated on the BIRD benchmark, our approach achieved an execution accuracy of 55.9%, setting a new SOTA on its holdout test set.",,arXiv,"['cs.cl', 'cs.ai', 'cs.db', 'cs.hc']",, fewshot incontext learning for knowledge base question answering,"['Tianle Li', 'Xueguang Ma', 'Alex Zhuang', 'Yu Gu', 'Yu Su', 'Wenhu Chen']",http://arxiv.org/pdf/2305.01750v2.pdf,2023-05-02,," Question answering over knowledge bases is considered a difficult problem due to the challenge of generalizing to a wide variety of possible natural language questions. Additionally, the heterogeneity of knowledge base schema items between different knowledge bases often necessitates specialized training for different knowledge base question-answering (KBQA) datasets. To handle questions over diverse KBQA datasets with a unified training-free framework, we propose KB-BINDER, which for the first time enables few-shot in-context learning over KBQA tasks. Firstly, KB-BINDER leverages large language models like Codex to generate logical forms as the draft for a specific question by imitating a few demonstrations. Secondly, KB-BINDER grounds on the knowledge base to bind the generated draft to an executable one with BM25 score matching. The experimental results on four public heterogeneous KBQA datasets show that KB-BINDER can achieve a strong performance with only a few in-context demonstrations. Especially on GraphQA and 3-hop MetaQA, KB-BINDER can even outperform the state-of-the-art trained models. On GrailQA and WebQSP, our model is also on par with other fully-trained models. We believe KB-BINDER can serve as an important baseline for future research. Our code is available at https://github.com/ltl3A87/KB-BINDER.",,arXiv,"['cs.cl', 'cs.ai']",, text classification via large language models,"['Xiaofei Sun', 'Xiaoya Li', 'Jiwei Li', 'Fei Wu', 'Shangwei Guo', 'Tianwei Zhang', 'Guoyin Wang']",http://arxiv.org/pdf/2305.08377v3.pdf,2023-05-15,," Despite the remarkable success of large-scale Language Models (LLMs) such as GPT-3, their performances still significantly underperform fine-tuned models in the task of text classification. This is due to (1) the lack of reasoning ability in addressing complex linguistic phenomena (e.g., intensification, contrast, irony etc); (2) limited number of tokens allowed in in-context learning. In this paper, we introduce Clue And Reasoning Prompting (CARP). CARP adopts a progressive reasoning strategy tailored to addressing the complex linguistic phenomena involved in text classification: CARP first prompts LLMs to find superficial clues (e.g., keywords, tones, semantic relations, references, etc), based on which a diagnostic reasoning process is induced for final decisions. To further address the limited-token issue, CARP uses a fine-tuned model on the supervised dataset for $k$NN demonstration search in the in-context learning, allowing the model to take the advantage of both LLM's generalization ability and the task-specific evidence provided by the full labeled dataset. Remarkably, CARP yields new SOTA performances on 4 out of 5 widely-used text-classification benchmarks, 97.39 (+1.24) on SST-2, 96.40 (+0.72) on AGNews, 98.78 (+0.25) on R8 and 96.95 (+0.6) on R52, and a performance comparable to SOTA on MR (92.39 v.s. 93.3). 
More importantly, we find that CARP delivers impressive abilities on low-resource and domain-adaptation setups. Specifically, using 16 examples per class, CARP achieves comparable performances to supervised models with 1,024 examples per class.",,arXiv,['cs.cl'],, exploring incontext learning capabilities of foundation models for generating knowledge graphs from text,"['Hanieh Khorashadizadeh', 'Nandana Mihindukulasooriya', 'Sanju Tiwari', 'Jinghua Groppe', 'Sven Groppe']",http://arxiv.org/pdf/2305.08804v1.pdf,2023-05-15,," Knowledge graphs can represent information about the real-world using entities and their relations in a structured and semantically rich manner and they enable a variety of downstream applications such as question-answering, recommendation systems, semantic search, and advanced analytics. However, at the moment, building a knowledge graph involves a lot of manual effort and thus hinders their application in some situations and the automation of this process might benefit especially for small organizations. Automatically generating structured knowledge graphs from a large volume of natural language is still a challenging task and the research on sub-tasks such as named entity extraction, relation extraction, entity and relation linking, and knowledge graph construction aims to improve the state of the art of automatic construction and completion of knowledge graphs from text. The recent advancement of foundation models with billions of parameters trained in a self-supervised manner with large volumes of training data that can be adapted to a variety of downstream tasks has helped to demonstrate high performance on a large range of Natural Language Processing (NLP) tasks. In this context, one emerging paradigm is in-context learning where a language model is used as it is with a prompt that provides instructions and some examples to perform a task without changing the parameters of the model using traditional approaches such as fine-tuning. This way, no computing resources are needed for re-training/fine-tuning the models and the engineering effort is minimal. Thus, it would be beneficial to utilize such capabilities for generating knowledge graphs from text.",,arXiv,['cs.cl'],, what incontext learning learns incontext disentangling task recognition and task learning,"['Jane Pan', 'Tianyu Gao', 'Howard Chen', 'Danqi Chen']",http://arxiv.org/pdf/2305.09731v1.pdf,2023-05-16,," Large language models (LLMs) exploit in-context learning (ICL) to solve tasks with only a few demonstrations, but its mechanisms are not yet well-understood. Some works suggest that LLMs only recall already learned concepts from pre-training, while others hint that ICL performs implicit learning over demonstrations. We characterize two ways through which ICL leverages demonstrations. Task recognition (TR) captures the extent to which LLMs can recognize a task through demonstrations -- even without ground-truth labels -- and apply their pre-trained priors, whereas task learning (TL) is the ability to capture new input-label mappings unseen in pre-training. Using a wide range of classification datasets and three LLM families (GPT-3, LLaMA and OPT), we design controlled experiments to disentangle the roles of TR and TL in ICL. We show that (1) models can achieve non-trivial performance with only TR, and TR does not further improve with larger models or more demonstrations; (2) LLMs acquire TL as the model scales, and TL's performance consistently improves with more demonstrations in context. 
Our findings unravel two different forces behind ICL and we advocate for discriminating them in future ICL research due to their distinct nature.",,arXiv,"['cs.cl', 'cs.lg']",, temporal knowledge graph forecasting without knowledge using incontext learning,"['Dong-Ho Lee', 'Kian Ahrabian', 'Woojeong Jin', 'Fred Morstatter', 'Jay Pujara']",http://arxiv.org/pdf/2305.10613v3.pdf,2023-05-17,," Temporal knowledge graph (TKG) forecasting benchmarks challenge models to predict future facts using knowledge of past facts. In this paper, we apply large language models (LLMs) to these benchmarks using in-context learning (ICL). We investigate whether and to what extent LLMs can be used for TKG forecasting, especially without any fine-tuning or explicit modules for capturing structural and temporal information. For our experiments, we present a framework that converts relevant historical facts into prompts and generates ranked predictions using token probabilities. Surprisingly, we observe that LLMs, out-of-the-box, perform on par with state-of-the-art TKG models carefully designed and trained for TKG forecasting. Our extensive evaluation presents performances across several models and datasets with different characteristics, compares alternative heuristics for preparing contextual information, and contrasts to prominent TKG methods and simple frequency and recency baselines. We also discover that using numerical indices instead of entity/relation names, i.e., hiding semantic information, does not significantly affect the performance ($\pm$0.4\% Hit@1). This shows that prior semantic knowledge is unnecessary; instead, LLMs can leverage the existing patterns in the context to achieve such performance. Our analysis also reveals that ICL enables LLMs to learn irregular patterns from the historical context, going beyond simple predictions based on common or recent information.",,arXiv,['cs.cl'],, learning incontext learning for named entity recognition,"['Jiawei Chen', 'Yaojie Lu', 'Hongyu Lin', 'Jie Lou', 'Wei Jia', 'Dai Dai', 'Hua Wu', 'Boxi Cao', 'Xianpei Han', 'Le Sun']",http://arxiv.org/pdf/2305.11038v3.pdf,2023-05-18,," Named entity recognition in real-world applications suffers from the diversity of entity types, the emergence of new entity types, and the lack of high-quality annotations. To address the above problems, this paper proposes an in-context learning-based NER approach, which can effectively inject in-context NER ability into PLMs and recognize entities of novel types on-the-fly using only a few demonstrative instances. Specifically, we model PLMs as a meta-function $\mathcal{ \lambda_ {\text{instruction, demonstrations, text}}.M}$, and a new entity extractor can be implicitly constructed by applying new instruction and demonstrations to PLMs, i.e., $\mathcal{ (\lambda . M)}$(instruction, demonstrations) $\to$ $\mathcal{F}$ where $\mathcal{F}$ will be a new entity extractor, i.e., $\mathcal{F}$: text $\to$ entities. 
To inject the above in-context NER ability into PLMs, we propose a meta-function pre-training algorithm, which pre-trains PLMs by comparing the (instruction, demonstration)-initialized extractor with a surrogate golden extractor. Experimental results on 4 few-shot NER datasets show that our method can effectively inject in-context NER ability into PLMs and significantly outperforms the PLMs+fine-tuning counterparts.",,arXiv,['cs.cl'],, plugmed improving specificity in patientcentered medical dialogue generation using incontext learning,"['Chengfeng Dou', 'Zhi Jin', 'Wenping Jiao', 'Haiyan Zhao', 'Zhenwei Tao', 'Yongqiang Zhao']",http://arxiv.org/pdf/2305.11508v2.pdf,2023-05-19,," The patient-centered medical dialogue systems strive to offer diagnostic interpretation services to users who are less knowledgeable about medical knowledge, through emphasizing the importance of providing responses specific to the patients. It is difficult for the large language models (LLMs) to guarantee the specificity of responses in spite of its promising performance even in some tasks in medical field. Inspired by in-context learning, we propose PlugMed, a Plug-and-Play Medical Dialogue System, for addressing this challenge. PlugMed is equipped with two modules, the prompt generation (PG) module and the response ranking (RR) module, to enhances LLMs' dialogue strategies for improving the specificity of the dialogue. The PG module is designed to stimulate the imitative ability of LLMs by providing them with real dialogues from similar patients as prompts. The RR module incorporates fine-tuned small model as response filter to enable the selection of appropriate responses generated by LLMs. Furthermore, we introduce a new evaluation method based on matching both user's intent and high-frequency medical term to effectively assess the specificity of the responses. We conduct experimental evaluations on three medical dialogue datasets, and the results, including both automatic and human evaluation, demonstrate the effectiveness of our approach.",,arXiv,"['cs.cl', 'cs.ai', 'i.2.7']",, toolkengpt augmenting frozen language models with massive tools via tool embeddings,"['Shibo Hao', 'Tianyang Liu', 'Zhen Wang', 'Zhiting Hu']",http://arxiv.org/pdf/2305.11554v4.pdf,2023-05-19,," Augmenting large language models (LLMs) with external tools has emerged as a promising approach to solving complex problems. However, traditional methods, which finetune LLMs with tool demonstration data, can be both costly and restricted to a predefined set of tools. Recent in-context learning paradigm alleviates these issues, but the limited context length only allows for a few shots of demonstrations, leading to suboptimal understandings of the tools. Moreover, when there are numerous tools to choose from, in-context learning could completely fail to work. In this paper, we propose an alternative approach, $\textbf{ToolkenGPT}$, which combines the benefits of both sides. Our approach represents each $\underline{tool}$ as a to$\underline{ken}$ ($\textit{toolken}$) and learns an embedding for it, enabling tool calls in the same way as generating a regular word token. Once a toolken is triggered, the LLM is prompted to complete arguments for the tool to execute. ToolkenGPT offers the flexibility to plug in an arbitrary number of tools by expanding the set of toolkens on the fly. In addition, it improves tool use by allowing extensive demonstration data for learning the toolken embeddings. 
In diverse domains, including numerical reasoning, knowledge-based question answering, and embodied plan generation, our approach effectively augments LLMs with tools and substantially outperforms various latest baselines. ToolkenGPT demonstrates the promising ability to use relevant tools from a large tool set in complex scenarios.",,arXiv,"['cs.cl', 'cs.lg']",, measuring inductive biases of incontext learning with underspecified demonstrations,"['Chenglei Si', 'Dan Friedman', 'Nitish Joshi', 'Shi Feng', 'Danqi Chen', 'He He']",http://arxiv.org/pdf/2305.13299v1.pdf,2023-05-22,," In-context learning (ICL) is an important paradigm for adapting large language models (LLMs) to new tasks, but the generalization behavior of ICL remains poorly understood. We investigate the inductive biases of ICL from the perspective of feature bias: which feature ICL is more likely to use given a set of underspecified demonstrations in which two features are equally predictive of the labels. First, we characterize the feature biases of GPT-3 models by constructing underspecified demonstrations from a range of NLP datasets and feature combinations. We find that LLMs exhibit clear feature biases - for example, demonstrating a strong bias to predict labels according to sentiment rather than shallow lexical features, like punctuation. Second, we evaluate the effect of different interventions that are designed to impose an inductive bias in favor of a particular feature, such as adding a natural language instruction or using semantically relevant label words. We find that, while many interventions can influence the learner to prefer a particular feature, it can be difficult to overcome strong prior biases. Overall, our results provide a broader picture of the types of features that ICL may be more likely to exploit and how to impose inductive biases that are better aligned with the intended task.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, buffet benchmarking large language models for fewshot crosslingual transfer,"['Akari Asai', 'Sneha Kudugunta', 'Xinyan Velocity Yu', 'Terra Blevins', 'Hila Gonen', 'Machel Reid', 'Yulia Tsvetkov', 'Sebastian Ruder', 'Hannaneh Hajishirzi']",http://arxiv.org/pdf/2305.14857v1.pdf,2023-05-24,," Despite remarkable advancements in few-shot generalization in natural language processing, most models are developed and evaluated primarily in English. To facilitate research on few-shot cross-lingual transfer, we introduce a new benchmark, called BUFFET, which unifies 15 diverse tasks across 54 languages in a sequence-to-sequence format and provides a fixed set of few-shot examples and instructions. BUFFET is designed to establish a rigorous and equitable evaluation framework for few-shot cross-lingual transfer across a broad range of tasks and languages. Using BUFFET, we perform thorough evaluations of state-of-the-art multilingual large language models with different transfer methods, namely in-context learning and fine-tuning. Our findings reveal significant room for improvement in few-shot in-context cross-lingual transfer. In particular, ChatGPT with in-context learning often performs worse than much smaller mT5-base models fine-tuned on English task data and few-shot in-language examples. 
Our analysis suggests various avenues for future research in few-shot cross-lingual transfer, such as improved pretraining, understanding, and future evaluations.",,arXiv,['cs.cl'],, measuring and mitigating constraint violations of incontext learning for utterancetoapi semantic parsing,"['Shufan Wang', 'Sebastien Jean', 'Sailik Sengupta', 'James Gung', 'Nikolaos Pappas', 'Yi Zhang']",http://arxiv.org/pdf/2305.15338v1.pdf,2023-05-24,," In executable task-oriented semantic parsing, the system aims to translate users' utterances in natural language to machine-interpretable programs (API calls) that can be executed according to pre-defined API specifications. With the popularity of Large Language Models (LLMs), in-context learning offers a strong baseline for such scenarios, especially in data-limited regimes. However, LLMs are known to hallucinate and therefore pose a formidable challenge in constraining generated content. Thus, it remains uncertain if LLMs can effectively perform task-oriented utterance-to-API generation where respecting API's structural and task-specific constraints is crucial. In this work, we seek to measure, analyze and mitigate such constraints violations. First, we identify the categories of various constraints in obtaining API-semantics from task-oriented utterances, and define fine-grained metrics that complement traditional ones. Second, we leverage these metrics to conduct a detailed error analysis of constraints violations seen in state-of-the-art LLMs, which motivates us to investigate two mitigation strategies: Semantic-Retrieval of Demonstrations (SRD) and API-aware Constrained Decoding (API-CD). Our experiments show that these strategies are effective at reducing constraints violations and improving the quality of the generated API calls, but require careful consideration given their implementation complexity and latency.",,arXiv,"['cs.ai', 'cs.cl']",, what can large language models do in chemistry a comprehensive benchmark on eight tasks,"['Taicheng Guo', 'Kehan Guo', 'Bozhao Nan', 'Zhenwen Liang', 'Zhichun Guo', 'Nitesh V. Chawla', 'Olaf Wiest', 'Xiangliang Zhang']",http://arxiv.org/pdf/2305.18365v3.pdf,2023-05-27,," Large Language Models (LLMs) with strong abilities in natural language processing tasks have emerged and have been applied in various kinds of areas such as science, finance and software engineering. However, the capability of LLMs to advance the field of chemistry remains unclear. In this paper, rather than pursuing state-of-the-art performance, we aim to evaluate capabilities of LLMs in a wide range of tasks across the chemistry domain. We identify three key chemistry-related capabilities including understanding, reasoning and explaining to explore in LLMs and establish a benchmark containing eight chemistry tasks. Our analysis draws on widely recognized datasets facilitating a broad exploration of the capacities of LLMs within the context of practical chemistry. Five LLMs (GPT-4, GPT-3.5, Davinci-003, Llama and Galactica) are evaluated for each chemistry task in zero-shot and few-shot in-context learning settings with carefully selected demonstration examples and specially crafted prompts. Our investigation found that GPT-4 outperformed other models and LLMs exhibit different competitive levels in eight chemistry tasks. In addition to the key findings from the comprehensive benchmark analysis, our work provides insights into the limitation of current LLMs and the impact of in-context learning settings on LLMs' performance across various chemistry tasks. 
The code and datasets used in this study are available at https://github.com/ChemFoundationModels/ChemLLMBench.",,arXiv,"['cs.cl', 'cs.ai']",, mitigating label biases for incontext learning,"['Yu Fei', 'Yifan Hou', 'Zeming Chen', 'Antoine Bosselut']",http://arxiv.org/pdf/2305.19148v3.pdf,2023-05-28,," Various design settings for in-context learning (ICL), such as the choice and order of the in-context examples, can bias a model toward a particular prediction without being reflective of an understanding of the task. While many studies discuss these design choices, there have been few systematic investigations into categorizing them and mitigating their impact. In this work, we define a typology for three types of label biases in ICL for text classification: vanilla-label bias, context-label bias, and domain-label bias (which we conceptualize and detect for the first time). Our analysis demonstrates that prior label bias calibration methods fall short of addressing all three types of biases. Specifically, domain-label bias restricts LLMs to random-level performance on many tasks regardless of the choice of in-context examples. To mitigate the effect of these biases, we propose a simple bias calibration method that estimates a language model's label bias using random in-domain words from the task corpus. After controlling for this estimated bias when making predictions, our novel domain-context calibration significantly improves the ICL performance of GPT-J and GPT-3 on a wide range of tasks. The gain is substantial on tasks with large domain-label bias (up to 37% in Macro-F1). Furthermore, our results generalize to models with different scales, pretraining methods, and manually-designed task instructions, showing the prevalence of label biases in ICL.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, pretraining task diversity and the emergence of nonbayesian incontext learning for regression,"['Allan Raventós', 'Mansheej Paul', 'Feng Chen', 'Surya Ganguli']",http://arxiv.org/pdf/2306.15063v2.pdf,2023-06-26,," Pretrained transformers exhibit the remarkable ability of in-context learning (ICL): they can learn tasks from just a few examples provided in the prompt without updating any weights. This raises a foundational question: can ICL solve fundamentally $\textit{new}$ tasks that are very different from those seen during pretraining? To probe this question, we examine ICL's performance on linear regression while varying the diversity of tasks in the pretraining dataset. We empirically demonstrate a $\textit{task diversity threshold}$ for the emergence of ICL. Below this threshold, the pretrained transformer cannot solve unseen regression tasks, instead behaving like a Bayesian estimator with the $\textit{non-diverse pretraining task distribution}$ as the prior. Beyond this threshold, the transformer significantly outperforms this estimator; its behavior aligns with that of ridge regression, corresponding to a Gaussian prior over $\textit{all tasks}$, including those not seen during pretraining. Thus, when pretrained on data with task diversity greater than the threshold, transformers $\textit{can}$ optimally solve fundamentally new tasks in-context. Importantly, this capability hinges on it deviating from the Bayes optimal estimator with the pretraining distribution as the prior. This study also explores the effect of regularization, model capacity and task structure and underscores, in a concrete example, the critical role of task diversity, alongside data and model scale, in the emergence of ICL. 
Code is available at https://github.com/mansheej/icl-task-diversity.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cl']",, understanding incontext learning via supportive pretraining data,"['Xiaochuang Han', 'Daniel Simig', 'Todor Mihaylov', 'Yulia Tsvetkov', 'Asli Celikyilmaz', 'Tianlu Wang']",http://arxiv.org/pdf/2306.15091v1.pdf,2023-06-26,," In-context learning (ICL) improves language models' performance on a variety of NLP tasks by simply demonstrating a handful of examples at inference time. It is not well understood why ICL ability emerges, as the model has never been specifically trained on such demonstrations. Unlike prior work that explores implicit mechanisms behind ICL, we study ICL via investigating the pretraining data. Specifically, we first adapt an iterative, gradient-based approach to find a small subset of pretraining data that supports ICL. We observe that a continued pretraining on this small subset significantly improves the model's ICL ability, by up to 18%. We then compare the supportive subset contrastively with random subsets of pretraining data and discover: (1) The supportive pretraining data to ICL do not have a higher domain relevance to downstream tasks. (2) The supportive pretraining data have a higher mass of rarely occurring, long-tail tokens. (3) The supportive pretraining data are challenging examples where the information gain from long-range context is below average, indicating learning to incorporate difficult long-range context encourages ICL. Our work takes a first step towards understanding ICL via analyzing instance-level pretraining data. Our insights have a potential to enhance the ICL ability of language models by actively guiding the construction of pretraining data in the future.",,arXiv,['cs.cl'],, schemalearning and rebinding as mechanisms of incontext learning and emergence,"['Sivaramakrishnan Swaminathan', 'Antoine Dedieu', 'Rajkumar Vasudeva Raju', 'Murray Shanahan', 'Miguel Lazaro-Gredilla', 'Dileep George']",http://arxiv.org/pdf/2307.01201v1.pdf,2023-06-16,," In-context learning (ICL) is one of the most powerful and most unexpected capabilities to emerge in recent transformer-based large language models (LLMs). Yet the mechanisms that underlie it are poorly understood. In this paper, we demonstrate that comparable ICL capabilities can be acquired by an alternative sequence prediction learning method using clone-structured causal graphs (CSCGs). Moreover, a key property of CSCGs is that, unlike transformer-based LLMs, they are {\em interpretable}, which considerably simplifies the task of explaining how ICL works. Specifically, we show that it uses a combination of (a) learning template (schema) circuits for pattern completion, (b) retrieving relevant templates in a context-sensitive manner, and (c) rebinding of novel tokens to appropriate slots in the templates. We go on to marshall evidence for the hypothesis that similar mechanisms underlie ICL in LLMs. For example, we find that, with CSCGs as with LLMs, different capabilities emerge at different levels of overparameterization, suggesting that overparameterization helps in learning more complex template (schema) circuits. 
By showing how ICL can be achieved with small models and datasets, we open up a path to novel architectures, and take a vital step towards a more general understanding of the mechanics behind this important capability.",,arXiv,"['cs.cl', 'cs.ai']",, towards understanding incontext learning with contrastive demonstrations and saliency maps,"['Paiheng Xu', 'Fuxiao Liu', 'Zongxia Li', 'Hyemi Song']",http://arxiv.org/pdf/2307.05052v2.pdf,2023-07-11,," We investigate the role of various demonstration components in the in-context learning (ICL) performance of large language models (LLMs). Specifically, we explore the impacts of ground-truth labels, input distribution, and complementary explanations, particularly when these are altered or perturbed. We build on previous work, which offers mixed findings on how these elements influence ICL. To probe these questions, we employ explainable NLP (XNLP) methods and utilize saliency maps of contrastive demonstrations for both qualitative and quantitative analysis. Our findings reveal that flipping ground-truth labels significantly affects the saliency, though it's more noticeable in larger LLMs. Our analysis of the input distribution at a granular level reveals that changing sentiment-indicative terms in a sentiment analysis task to neutral ones does not have as substantial an impact as altering ground-truth labels. Finally, we find that the effectiveness of complementary explanations in boosting ICL performance is task-dependent, with limited benefits seen in sentiment analysis tasks compared to symbolic reasoning tasks. These insights are critical for understanding the functionality of LLMs and guiding the development of effective demonstrations, which is increasingly relevant in light of the growing use of LLMs in applications such as ChatGPT. Our research code is publicly available at https://github.com/paihengxu/XICL.",,arXiv,"['cs.cl', 'cs.ai']",, lorahub efficient crosstask generalization via dynamic lora composition,"['Chengsong Huang', 'Qian Liu', 'Bill Yuchen Lin', 'Tianyu Pang', 'Chao Du', 'Min Lin']",http://arxiv.org/pdf/2307.13269v2.pdf,2023-07-25,," Low-rank adaptations (LoRA) are often employed to fine-tune large language models (LLMs) for new tasks. This paper investigates LoRA composability for cross-task generalization and introduces LoraHub, a simple framework devised for the purposive assembly of LoRA modules trained on diverse given tasks, with the objective of achieving adaptable performance on unseen tasks. With just a few examples from a new task, LoraHub can fluidly combine multiple LoRA modules, eliminating the need for human expertise and assumptions. Notably, the composition requires neither additional model parameters nor gradients. Empirical results on the Big-Bench Hard benchmark suggest that LoraHub, while not surpassing the performance of in-context learning, offers a notable performance-efficiency trade-off in few-shot scenarios by employing a significantly reduced number of tokens per example during inference. Notably, LoraHub establishes a better upper bound compared to in-context learning when paired with different demonstration examples, demonstrating its potential for future development. Our vision is to establish a platform for LoRA modules, empowering users to share their trained LoRA modules. This collaborative approach facilitates the seamless application of LoRA modules to novel tasks, contributing to an adaptive ecosystem. 
Our code is available at https://github.com/sail-sg/lorahub, and all the pre-trained LoRA modules are released at https://huggingface.co/lorahub.",,arXiv,"['cs.cl', 'cs.ai']",, ambiguityaware incontext learning with large language models,"['Lingyu Gao', 'Aditi Chaudhary', 'Krishna Srinivasan', 'Kazuma Hashimoto', 'Karthik Raman', 'Michael Bendersky']",http://arxiv.org/pdf/2309.07900v2.pdf,2023-09-14,," In-context learning (ICL) i.e. showing LLMs only a few task-specific demonstrations has led to downstream gains with no task-specific fine-tuning required. However, LLMs are sensitive to the choice of prompts, and therefore a crucial research question is how to select good demonstrations for ICL. One effective strategy is leveraging semantic similarity between the ICL demonstrations and test inputs by using a text retriever, which however is sub-optimal as that does not consider the LLM's existing knowledge about that task. From prior work (Lyu et al., 2023), we already know that labels paired with the demonstrations bias the model predictions. This leads us to our hypothesis whether considering LLM's existing knowledge about the task, especially with respect to the output label space can help in a better demonstration selection strategy. Through extensive experimentation on three text classification tasks, we find that it is beneficial to not only choose semantically similar ICL demonstrations but also to choose those demonstrations that help resolve the inherent label ambiguity surrounding the test example. Interestingly, we find that including demonstrations that the LLM previously mis-classified and also fall on the test example's decision boundary, brings the most performance gain.",,arXiv,"['cs.cl', 'cs.ir']",, understanding incontext learning in transformers and llms by learning to learn discrete functions,"['Satwik Bhattamishra', 'Arkil Patel', 'Phil Blunsom', 'Varun Kanade']",http://arxiv.org/pdf/2310.03016v1.pdf,2023-10-04,," In order to understand the in-context learning phenomenon, recent works have adopted a stylized experimental framework and demonstrated that Transformers can learn gradient-based learning algorithms for various classes of real-valued functions. However, the limitations of Transformers in implementing learning algorithms, and their ability to learn other forms of algorithms are not well understood. Additionally, the degree to which these capabilities are confined to attention-based models is unclear. Furthermore, it remains to be seen whether the insights derived from these stylized settings can be extrapolated to pretrained Large Language Models (LLMs). In this work, we take a step towards answering these questions by demonstrating the following: (a) On a test-bed with a variety of Boolean function classes, we find that Transformers can nearly match the optimal learning algorithm for 'simpler' tasks, while their performance deteriorates on more 'complex' tasks. Additionally, we find that certain attention-free models perform (almost) identically to Transformers on a range of tasks. (b) When provided a teaching sequence, i.e. a set of examples that uniquely identifies a function in a class, we show that Transformers learn more sample-efficiently. Interestingly, our results show that Transformers can learn to implement two distinct algorithms to solve a single task, and can adaptively select the more sample-efficient algorithm depending on the sequence of in-context examples. (c) Lastly, we show that extant LLMs, e.g. 
LLaMA-2, GPT-4, can compete with nearest-neighbor baselines on prediction tasks that are guaranteed to not be in their training set.",,arXiv,"['cs.lg', 'cs.cl']",, demonstrations are all you need advancing offensive content paraphrasing using incontext learning,"['Anirudh Som', 'Karan Sikka', 'Helen Gent', 'Ajay Divakaran', 'Andreas Kathol', 'Dimitra Vergyri']",http://arxiv.org/pdf/2310.10707v1.pdf,2023-10-16,," Paraphrasing of offensive content is a better alternative to content removal and helps improve civility in a communication environment. Supervised paraphrasers; however, rely heavily on large quantities of labelled data to help preserve meaning and intent. They also retain a large portion of the offensiveness of the original content, which raises questions on their overall usability. In this paper we aim to assist practitioners in developing usable paraphrasers by exploring In-Context Learning (ICL) with large language models (LLMs), i.e., using a limited number of input-label demonstration pairs to guide the model in generating desired outputs for specific queries. Our study focuses on key factors such as -- number and order of demonstrations, exclusion of prompt instruction, and reduction in measured toxicity. We perform principled evaluation on three datasets, including our proposed Context-Aware Polite Paraphrase dataset, comprising of dialogue-style rude utterances, polite paraphrases, and additional dialogue context. We evaluate our approach using two closed source and one open source LLM. Our results reveal that ICL is comparable to supervised methods in generation quality, while being qualitatively better by 25% on human evaluation and attaining lower toxicity by 76%. Also, ICL-based paraphrasers only show a slight reduction in performance even with just 10% training data.",,arXiv,"['cs.cl', 'cs.ai']",, pretraining data mixtures enable narrow model selection capabilities in transformer models,"['Steve Yadlowsky', 'Lyric Doshi', 'Nilesh Tripuraneni']",http://arxiv.org/pdf/2311.00871v1.pdf,2023-11-01,," Transformer models, notably large language models (LLMs), have the remarkable ability to perform in-context learning (ICL) -- to perform new tasks when prompted with unseen input-output examples without any explicit model training. In this work, we study how effectively transformers can bridge between their pretraining data mixture, comprised of multiple distinct task families, to identify and learn new tasks in-context which are both inside and outside the pretraining distribution. Building on previous work, we investigate this question in a controlled setting, where we study transformer models trained on sequences of $(x, f(x))$ pairs rather than natural language. Our empirical results show transformers demonstrate near-optimal unsupervised model selection capabilities, in their ability to first in-context identify different task families and in-context learn within them when the task families are well-represented in their pretraining data. However when presented with tasks or functions which are out-of-domain of their pretraining data, we demonstrate various failure modes of transformers and degradation of their generalization for even simple extrapolation tasks. 
Together our results highlight that the impressive ICL abilities of high-capacity sequence models may be more closely tied to the coverage of their pretraining data mixtures than inductive biases that create fundamental generalization capabilities.",,arXiv,"['cs.lg', 'cs.cl', 'stat.ml']",, large language models are fewshot summarizers multiintent comment generation via incontext learning,"['Mingyang Geng', 'Shangwen Wang', 'Dezun Dong', 'Haotian Wang', 'Ge Li', 'Zhi Jin', 'Xiaoguang Mao', 'Xiangke Liao']",http://arxiv.org/pdf/2304.11384v3.pdf,2023-04-22,," Code comment generation aims at generating natural language descriptions for a code snippet to facilitate developers' program comprehension activities. Despite being studied for a long time, a bottleneck for existing approaches is that given a code snippet, they can only generate one comment while developers usually need to know information from diverse perspectives such as what is the functionality of this code snippet and how to use it. To tackle this limitation, this study empirically investigates the feasibility of utilizing large language models (LLMs) to generate comments that can fulfill developers' diverse intents. Our intuition is based on the facts that (1) the code and its pairwise comment are used during the pre-training process of LLMs to build the semantic connection between the natural language and programming language, and (2) comments in the real-world projects, which are collected for the pre-training, usually contain different developers' intents. We thus postulate that the LLMs can already understand the code from different perspectives after the pre-training. Indeed, experiments on two large-scale datasets demonstrate the rationale of our insights: by adopting the in-context learning paradigm and giving adequate prompts to the LLM (e.g., providing it with ten or more examples), the LLM can significantly outperform a state-of-the-art supervised learning approach on generating comments with multiple intents. Results also show that customized strategies for constructing the prompts and post-processing strategies for reranking the results can both boost the LLM's performances, which shed light on future research directions for using LLMs to achieve comment generation.",,arXiv,['cs.se'],, beyond task performance evaluating and reducing the flaws of large multimodal models with incontext learning,"['Mustafa Shukor', 'Alexandre Rame', 'Corentin Dancette', 'Matthieu Cord']",http://arxiv.org/pdf/2310.00647v2.pdf,2023-10-01,," Following the success of Large Language Models (LLMs), Large Multimodal Models (LMMs), such as the Flamingo model and its subsequent competitors, have started to emerge as natural steps towards generalist agents. However, interacting with recent LMMs reveals major limitations that are hardly captured by the current evaluation benchmarks. Indeed, task performances (e.g., VQA accuracy) alone do not provide enough clues to understand their real capabilities, limitations, and to which extent such models are aligned to human expectations. To refine our understanding of those flaws, we deviate from the current evaluation paradigm, and (1) evaluate 10 recent open-source LMMs from 3B up to 80B parameter scale, on 5 different axes; hallucinations, abstention, compositionality, explainability and instruction following. Our evaluation on these axes reveals major flaws in LMMs. 
While the current go-to solution to align these models is based on training, such as instruction tuning or RLHF, we rather (2) explore the training-free in-context learning (ICL) as a solution, and study how it affects these limitations. Based on our ICL study, (3) we push ICL further and propose new multimodal ICL variants such as; Multitask-ICL, Chain-of-Hindsight-ICL, and Self-Correcting-ICL. Our findings are as follows. (1) Despite their success, LMMs have flaws that remain unsolved with scaling alone. (2) The effect of ICL on LMMs flaws is nuanced; despite its effectiveness for improved explainability, answer abstention, ICL only slightly improves instruction following, does not improve compositional abilities, and actually even amplifies hallucinations. (3) The proposed ICL variants are promising as post-hoc approaches to efficiently tackle some of those flaws. The code is available here: https://github.com/mshukor/EvALign-ICL.",,arXiv,"['cs.cv', 'cs.mm']",, the inductive bias of incontext learning rethinking pretraining example design,"['Yoav Levine', 'Noam Wies', 'Daniel Jannai', 'Dan Navon', 'Yedid Hoshen', 'Amnon Shashua']",http://arxiv.org/pdf/2110.04541v3.pdf,2021-10-09,," Pretraining Neural Language Models (NLMs) over a large corpus involves chunking the text into training examples, which are contiguous text segments of sizes processable by the neural architecture. We highlight a bias introduced by this common practice: we prove that the pretrained NLM can model much stronger dependencies between text segments that appeared in the same training example, than it can between text segments that appeared in different training examples. This intuitive result has a twofold role. First, it formalizes the motivation behind a broad line of recent successful NLM training heuristics, proposed for the pretraining and fine-tuning stages, which do not necessarily appear related at first glance. Second, our result clearly indicates further improvements to be made in NLM pretraining for the benefit of Natural Language Understanding tasks. As an example, we propose ""kNN-Pretraining"": we show that including semantically related non-neighboring sentences in the same pretraining example yields improved sentence representations and open domain question answering abilities. This theoretically motivated degree of freedom for pretraining example design indicates new training schemes for self-improving representations.",,arXiv,"['cs.cl', 'cs.lg']",, instruction induction from few examples to natural language task descriptions,"['Or Honovich', 'Uri Shaham', 'Samuel R. Bowman', 'Omer Levy']",http://arxiv.org/pdf/2205.10782v1.pdf,2022-05-22,," Large language models are able to perform a task by conditioning on a few input-output demonstrations - a paradigm known as in-context learning. We show that language models can explicitly infer an underlying task from a few demonstrations by prompting them to generate a natural language instruction that fits the examples. To explore this ability, we introduce the instruction induction challenge, compile a dataset consisting of 24 tasks, and define a novel evaluation metric based on executing the generated instruction. We discover that, to a large extent, the ability to generate instructions does indeed emerge when using a model that is both large enough and aligned to follow instructions; InstructGPT achieves 65.7% of human performance in our execution-based metric, while the original GPT-3 model reaches only 9.8% of human performance. 
This surprising result suggests that instruction induction might be a viable learning paradigm in and of itself, where instead of fitting a set of latent continuous parameters to the data, one searches for the best description in the natural language hypothesis space.",,arXiv,['cs.cl'],, large language models are few(1)shot table reasoners,['Wenhu Chen'],http://arxiv.org/pdf/2210.06710v2.pdf,2022-10-13,," Recent literature has shown that large language models (LLMs) are generally excellent few-shot reasoners to solve text reasoning tasks. However, the capability of LLMs on table reasoning tasks is yet to be explored. In this paper, we aim at understanding how well LLMs can perform table-related tasks with few-shot in-context learning. Specifically, we evaluated LLMs on popular table QA and fact verification datasets like WikiTableQuestion, FetaQA, TabFact, and FEVEROUS and found that LLMs are competent at complex reasoning over table structures, though these models are not pre-trained on any table corpus. When combined with `chain of thoughts' prompting, LLMs can achieve very strong performance with only a 1-shot demonstration, even on par with some SoTA models. We show that LLMs are even more competent at generating comprehensive long-form answers on FetaQA than tuned T5-large. We further manually studied the reasoning chains elicited from LLMs and found that these reasoning chains are highly consistent with the underlying semantic form. We believe that LLMs can serve as a simple yet generic baseline for future research. The code and data are released in https://github.com/wenhuchen/TableCoT.",,arXiv,['cs.cl'],, selfprompting large language models for zeroshot opendomain qa,"['Junlong Li', 'Zhuosheng Zhang', 'Hai Zhao']",http://arxiv.org/pdf/2212.08635v2.pdf,2022-12-16,," Open-Domain Question Answering (ODQA) aims at answering factoid questions without explicitly providing specific background documents. In a zero-shot setting, this task is more challenging since no data is available to train customized models like Retriever-Readers. Recently, Large Language Models (LLMs) like GPT-3 have shown their power in zero-shot ODQA with direct prompting methods, but these methods are still far from releasing the full powerfulness of LLMs only in an implicitly invoking way. In this paper, we propose a Self-Prompting framework to explicitly utilize the massive knowledge stored in the parameters of LLMs and their strong instruction understanding abilities. Concretely, we prompt LLMs step by step to generate multiple pseudo QA pairs with background passages and explanations from scratch and then use those generated elements for in-context learning. Experimental results show our method surpasses previous SOTA methods significantly on three widely-used ODQA datasets, and even achieves comparable performance with some Retriever-Reader models fine-tuned on full training data.",,arXiv,"['cs.cl', 'cs.ai']",, ontologically faithful generation of nonplayer character dialogues,"['Nathaniel Weir', 'Ryan Thomas', ""Randolph D'Amore"", 'Kellie Hill', 'Benjamin Van Durme', 'Harsh Jhamtani']",http://arxiv.org/pdf/2212.10618v2.pdf,2022-12-20,," We introduce a language generation task grounded in a popular video game environment.
KNUDGE (KNowledge Constrained User-NPC Dialogue GEneration) requires models to produce trees of dialogue between video game characters that accurately reflect quest and entity specifications stated in natural language. KNUDGE is constructed from side quest dialogues drawn directly from game data of Obsidian Entertainment's The Outer Worlds, leading to real-world complexities in generation: (1) dialogues are branching trees as opposed to linear chains of utterances; (2) utterances must remain faithful to the game lore -- character personas, backstories, and entity relationships; and (3) a dialogue must accurately reveal new quest details to the human player. We report results for a set of neural generation models using supervised and in-context learning techniques; we find competent performance but room for future work addressing the challenges of creating realistic, game-quality dialogues.",,arXiv,['cs.cl'],, batch prompting efficient inference with large language model apis,"['Zhoujun Cheng', 'Jungo Kasai', 'Tao Yu']",http://arxiv.org/pdf/2301.08721v2.pdf,2023-01-19,," Performing inference on large volumes of samples with large language models (LLMs) can be computationally and financially costly in industry and real-world use. We propose batch prompting, a simple yet effective prompting approach that enables the LLM to run inference in batches, instead of one sample at a time. Our method reduces both token and time costs while retaining downstream performance. We theoretically demonstrate that under a few-shot in-context learning setting, the inference costs decrease almost inverse linearly with the number of samples in each batch. We extensively validate the effectiveness of batch prompting on ten datasets across commonsense QA, arithmetic reasoning, and NLI/NLU: batch prompting significantly (up to 5x with six samples in batch) reduces the LLM (Codex) inference token and time costs while achieving better or comparable performance. For state-of-the-art Chat-based LLMs, e.g., GPT-3.5 and GPT-4, we show the benefits of batch prompting also hold. Further analysis shows that the number of samples in each batch and the complexity of tasks affect its performance. Moreover, batch prompting can be applied across different reasoning methods using LLMs. Our code can be found at the site https://github.com/xlang-ai/batch-prompting.",,arXiv,"['cs.cl', 'cs.ai']",, finding support examples for incontext learning,"['Xiaonan Li', 'Xipeng Qiu']",http://arxiv.org/pdf/2302.13539v3.pdf,2023-02-27,," Additionally, the strong dependency among in-context examples makes it an NP-hard combinatorial optimization problem and enumerating all permutations is infeasible. Hence we propose LENS, a fiLter-thEN-Search method to tackle this challenge in two stages: First we filter the dataset to obtain informative in-context examples individually. Specifically, we propose a novel metric, InfoScore, to evaluate the example's in-context informativeness based on the language model's feedback, and further propose a progressive filtering process to filter out uninformative examples. Then we propose diversity-guided example search which iteratively refines and evaluates the selected example permutations, to find examples that fully depict the task.
The experimental results show that LENS significantly outperforms a wide range of baselines.",,arXiv,['cs.cl'],, selfplanning code generation with large language models,"['Xue Jiang', 'Yihong Dong', 'Lecheng Wang', 'Zheng Fang', 'Qiwei Shang', 'Ge Li', 'Zhi Jin', 'Wenpin Jiao']",http://arxiv.org/pdf/2303.06689v2.pdf,2023-03-12,," Although large language models have demonstrated impressive ability in code generation, they are still struggling to address the complicated intent provided by humans. It is widely acknowledged that humans typically employ planning to decompose complex problems and schedule the solution steps prior to implementation. Thus we introduce planning into code generation to help the model understand complex intent and reduce the difficulty of problem solving. This paper proposes a self-planning code generation method with large language model, which consists of two phases, namely planning phase and implementation phase. Specifically, in the planning phase, the language model plans out the solution steps from the intent combined with in-context learning. Then it enters the implementation phase, where the model generates code step by step, guided by the solution steps. The effectiveness of self-planning code generation has been rigorously evaluated on multiple code generation datasets and the results have demonstrated a marked superiority over naive direct generation approaches with language model. The improvement in performance is substantial, highlighting the significance of self-planning in code generation tasks.",,arXiv,['cs.se'],, gpt is becoming a turing machine here are some ways to program it,"['Ana Jojic', 'Zhen Wang', 'Nebojsa Jojic']",http://arxiv.org/pdf/2303.14310v1.pdf,2023-03-25,," We demonstrate that, through appropriate prompting, GPT-3 family of models can be triggered to perform iterative behaviours necessary to execute (rather than just write or recall) programs that involve loops, including several popular algorithms found in computer science curricula or software developer interviews. We trigger execution and description of Iterations by Regimenting Self-Attention (IRSA) in one (or a combination) of three ways: 1) Using strong repetitive structure in an example of an execution path of a target program for one particular input, 2) Prompting with fragments of execution paths, and 3) Explicitly forbidding (skipping) self-attention to parts of the generated text. On a dynamic program execution, IRSA leads to larger accuracy gains than replacing the model with the much more powerful GPT-4. IRSA has promising applications in education, as the prompts and responses resemble student assignments in data structures and algorithms classes. Our findings hold implications for evaluating LLMs, which typically target the in-context learning: We show that prompts that may not even cover one full task example can trigger algorithmic behaviour, allowing solving problems previously thought of as hard for LLMs, such as logical puzzles. Consequently, prompt design plays an even more critical role in LLM performance than previously recognized.",,arXiv,['cs.cl'],, is chatgpt a highly fluent grammatical error correction system a comprehensive evaluation,"['Tao Fang', 'Shu Yang', 'Kaixin Lan', 'Derek F. Wong', 'Jinpeng Hu', 'Lidia S. Chao', 'Yue Zhang']",http://arxiv.org/pdf/2304.01746v1.pdf,2023-04-04,," ChatGPT, a large-scale language model based on the advanced GPT-3.5 architecture, has shown remarkable potential in various Natural Language Processing (NLP) tasks.
However, there is currently a dearth of comprehensive study exploring its potential in the area of Grammatical Error Correction (GEC). To showcase its capabilities in GEC, we design zero-shot chain-of-thought (CoT) and few-shot CoT settings using in-context learning for ChatGPT. Our evaluation involves assessing ChatGPT's performance on five official test sets in three different languages, along with three document-level GEC test sets in English. Our experimental results and human evaluations demonstrate that ChatGPT has excellent error detection capabilities and can freely correct errors to make the corrected sentences very fluent, possibly due to its over-correction tendencies and not adhering to the principle of minimal edits. Additionally, its performance in non-English and low-resource settings highlights its potential in multilingual GEC tasks. However, further analysis of various types of errors at the document-level has shown that ChatGPT cannot effectively correct agreement, coreference, tense errors across sentences, and cross-sentence boundary errors.",,arXiv,['cs.cl'],, a latent space theory for emergent abilities in large language models,['Hui Jiang'],http://arxiv.org/pdf/2304.09960v3.pdf,2023-04-19,," Languages are not created randomly but rather to communicate information. There is a strong association between languages and their underlying meanings, resulting in a sparse joint distribution that is heavily peaked according to their correlations. Moreover, these peak values happen to match with the marginal distribution of languages due to the sparsity. With the advent of LLMs trained on big data and large models, we can now precisely assess the marginal distribution of languages, providing a convenient means of exploring the sparse structures in the joint distribution for effective inferences. In this paper, we categorize languages as either unambiguous or {\epsilon}-ambiguous and present quantitative results to demonstrate that the emergent abilities of LLMs, such as language understanding, in-context learning, chain-of-thought prompting, and effective instruction fine-tuning, can all be attributed to Bayesian inference on the sparse joint distribution of languages.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, "stance detection with supervised, zeroshot, and fewshot applications",['Michael Burnham'],http://arxiv.org/pdf/2305.01723v1.pdf,2023-05-02,," Stance detection is the identification of an author's beliefs about a subject from a document. Researchers widely rely on sentiment analysis to accomplish this. However, recent research has shown that sentiment analysis is only loosely correlated with stance, if at all. This paper advances methods in text analysis by precisely defining the task of stance detection, providing a generalized framework for the task, and then presenting three distinct approaches for performing stance detection: supervised classification, zero-shot classification with NLI classifiers, and in-context learning. In doing so, I demonstrate how zero-shot and few-shot language classifiers can replace human labelers for a variety of tasks and discuss how their application and limitations differ from supervised classifiers. Finally, I demonstrate an application of zero-shot stance detection by replicating Block Jr et al. (2022).",,arXiv,['cs.cl'],, wanglab at mediqachat 2023 clinical note generation from doctorpatient conversations using large language models,"['John Giorgi', 'Augustin Toma', 'Ronald Xie', 'Sondra S. Chen', 'Kevin R. An', 'Grace X.
Zheng', 'Bo Wang']",http://arxiv.org/pdf/2305.02220v2.pdf,2023-05-03,," This paper describes our submission to the MEDIQA-Chat 2023 shared task forautomatic clinical note generation from doctor-patient conversations. We reportresults for two approaches: the first fine-tunes a pre-trained language model(PLM) on the shared task data, and the second uses few-shot in-context learning(ICL) with a large language model (LLM). Both achieve high performance asmeasured by automatic metrics (e.g. ROUGE, BERTScore) and ranked second andfirst, respectively, of all submissions to the shared task. Expert humanscrutiny indicates that notes generated via the ICL-based approach with GPT-4are preferred about as often as human-written notes, making it a promising pathtoward automated note generation from doctor-patient conversations.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, how good are commercial large language models on african languages,"['Jessica Ojo', 'Kelechi Ogueji']",http://arxiv.org/pdf/2305.06530v1.pdf,2023-05-11,," Recent advancements in Natural Language Processing (NLP) has led to theproliferation of large pretrained language models. These models have been shownto yield good performance, using in-context learning, even on unseen tasks andlanguages. They have also been exposed as commercial APIs as a form oflanguage-model-as-a-service, with great adoption. However, their performance onAfrican languages is largely unknown. We present a preliminary analysis ofcommercial large language models on two tasks (machine translation and textclassification) across eight African languages, spanning different languagefamilies and geographical areas. Our results suggest that commercial languagemodels produce below-par performance on African languages. We also find thatthey perform better on text classification than machine translation. Ingeneral, our findings present a call-to-action to ensure African languages arewell represented in commercial large language models, given their growingpopularity.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, chainofdictionary prompting elicits translation in large language models,"['Hongyuan Lu', 'Haoyang Huang', 'Dongdong Zhang', 'Haoran Yang', 'Wai Lam', 'Furu Wei']",http://arxiv.org/pdf/2305.06575v3.pdf,2023-05-11,," Large language models (LLMs) have shown surprisingly good performance inmultilingual neural machine translation (MNMT) even when trained withoutparallel data. Yet, despite the fact that the amount of training data isgigantic, they still struggle with translating rare words, particularly forlow-resource languages. Even worse, it is usually unrealistic to retrieverelevant demonstrations for in-context learning with low-resource languages onLLMs, which restricts the practical use of LLMs for translation -- how shouldwe mitigate this problem? To this end, we present a novel method, CoD, whichaugments LLMs with prior knowledge with the chains of multilingual dictionariesfor a subset of input words to elicit translation abilities for LLMs. Extensiveexperiments indicate that augmenting ChatGPT with CoD elicits large gains by upto 13x chrF++ points for MNMT (3.08 to 42.63 for English to Serbian written inCyrillic script) on FLORES-200 full devtest set. 
We further demonstrate theimportance of chaining the multilingual dictionaries, as well as thesuperiority of CoD to few-shot demonstration for low-resource languages.",,arXiv,['cs.cl'],, autotrial prompting language models for clinical trial design,"['Zifeng Wang', 'Cao Xiao', 'Jimeng Sun']",http://arxiv.org/pdf/2305.11366v2.pdf,2023-05-19,," Clinical trials are critical for drug development. Constructing theappropriate eligibility criteria (i.e., the inclusion/exclusion criteria forpatient recruitment) is essential for the trial's success. Proper design ofclinical trial protocols should consider similar precedent trials and theireligibility criteria to ensure sufficient patient coverage. In this paper, wepresent a method named AutoTrial to aid the design of clinical eligibilitycriteria using language models. It allows (1) controllable generation underinstructions via a hybrid of discrete and neural prompting, (2) scalableknowledge incorporation via in-context learning, and (3) explicit reasoningchains to provide rationales for understanding the outputs. Experiments on over70K clinical trials verify that AutoTrial generates high-quality criteria textsthat are fluent and coherent and with high accuracy in capturing the relevantclinical concepts to the target trial. It is noteworthy that our method, with amuch smaller parameter size, gains around 60% winning rate against the GPT-3.5baselines via human evaluations.",,arXiv,['cs.cl'],, "how to prompt llms for texttosql a study in zeroshot, singledomain, and crossdomain settings","['Shuaichen Chang', 'Eric Fosler-Lussier']",http://arxiv.org/pdf/2305.11853v3.pdf,2023-05-19,," Large language models (LLMs) with in-context learning have demonstratedremarkable capability in the text-to-SQL task. Previous research has promptedLLMs with various demonstration-retrieval strategies and intermediate reasoningsteps to enhance the performance of LLMs. However, those works often employvaried strategies when constructing the prompt text for text-to-SQL inputs,such as databases and demonstration examples. This leads to a lack ofcomparability in both the prompt constructions and their primary contributions.Furthermore, selecting an effective prompt construction has emerged as apersistent problem for future research. To address this limitation, wecomprehensively investigate the impact of prompt constructions across varioussettings and provide insights into prompt constructions for future text-to-SQLstudies.",,arXiv,['cs.cl'],, factchecking complex claims with programguided reasoning,"['Liangming Pan', 'Xiaobao Wu', 'Xinyuan Lu', 'Anh Tuan Luu', 'William Yang Wang', 'Min-Yen Kan', 'Preslav Nakov']",http://arxiv.org/pdf/2305.12744v1.pdf,2023-05-22,," Fact-checking real-world claims often requires collecting multiple pieces ofevidence and applying complex multi-step reasoning. In this paper, we presentProgram-Guided Fact-Checking (ProgramFC), a novel fact-checking model thatdecomposes complex claims into simpler sub-tasks that can be solved using ashared library of specialized functions. We first leverage the in-contextlearning ability of large language models to generate reasoning programs toguide the verification process. Afterward, we execute the program by delegatingeach sub-task to the corresponding sub-task handler. This process makes ourmodel both explanatory and data-efficient, providing clear explanations of itsreasoning process and requiring minimal training data. 
We evaluate ProgramFC ontwo challenging fact-checking datasets and show that it outperforms sevenfact-checking baselines across different settings of evidence availability,with explicit output programs that benefit human debugging. Our codes and dataare publicly available at https://github.com/mbzuai-nlp/ProgramFC.",,arXiv,"['cs.cl', 'cs.ai']",, mailex email event and argument extraction,"['Saurabh Srivastava', 'Gaurav Singh', 'Shou Matsumoto', 'Ali Raz', 'Paulo Costa', 'Joshua Poore', 'Ziyu Yao']",http://arxiv.org/pdf/2305.13469v2.pdf,2023-05-22,," In this work, we present the first dataset, MailEx, for performing eventextraction from conversational email threads. To this end, we first proposed anew taxonomy covering 10 event types and 76 arguments in the email domain. Ourfinal dataset includes 1.5K email threads and ~4K emails, which are annotatedwith totally ~8K event instances. To understand the task challenges, weconducted a series of experiments comparing three types of approaches, i.e.,fine-tuned sequence labeling, fine-tuned generative extraction, and few-shotin-context learning. Our results showed that the task of email event extractionis far from being addressed, due to challenges lying in, e.g., extractingnon-continuous, shared trigger spans, extracting non-named entity arguments,and modeling the email conversational history. Our work thus suggests morefuture investigations in this domain-specific event extraction task.",,arXiv,"['cs.cl', 'cs.ai']",, can chatgpt detect intent evaluating large language models for spoken language understanding,"['Mutian He', 'Philip N. Garner']",http://arxiv.org/pdf/2305.13512v2.pdf,2023-05-22,," Recently, large pretrained language models have demonstrated strong languageunderstanding capabilities. This is particularly reflected in their zero-shotand in-context learning abilities on downstream tasks through prompting. Toassess their impact on spoken language understanding (SLU), we evaluate severalsuch models like ChatGPT and OPT of different sizes on multiple benchmarks. Weverify the emergent ability unique to the largest models as they can reachintent classification accuracy close to that of supervised models with zero orfew shots on various languages given oracle transcripts. By contrast, theresults for smaller models fitting a single GPU fall far behind. We note thatthe error cases often arise from the annotation scheme of the dataset;responses from ChatGPT are still reasonable. We show, however, that the modelis worse at slot filling, and its performance is sensitive to ASR errors,suggesting serious challenges for the application of those textual models onSLU.",,arXiv,"['cs.cl', 'cs.ai', 'cs.sd', 'eess.as']",, logicllm exploring selfsupervised logicenhanced training for large language models,"['Fangkai Jiao', 'Zhiyang Teng', 'Shafiq Joty', 'Bosheng Ding', 'Aixin Sun', 'Zhengyuan Liu', 'Nancy F. Chen']",http://arxiv.org/pdf/2305.13718v2.pdf,2023-05-23,," Existing efforts to improve logical reasoning ability of language models havepredominantly relied on supervised fine-tuning, hindering generalization to newdomains and/or tasks. The development of Large Langauge Models (LLMs) hasdemonstrated the capacity of compressing abundant knowledge into a singleproxy, enabling them to tackle multiple tasks effectively. Our preliminaryexperiments, nevertheless, show that LLMs do not show capability on logicalreasoning. The performance of LLMs on logical reasoning benchmarks is farbehind the existing state-of-the-art baselines. 
In this paper, we make thefirst attempt to investigate the feasibility of incorporating logical knowledgethrough self-supervised post-training, and activating it via in-contextlearning, which we termed as LogicLLM. Specifically, we devise anauto-regressive objective variant of MERIt and integrate it with two LLMseries, i.e., FLAN-T5 and LLaMA, with parameter size ranging from 3 billion to13 billion. The results on two challenging logical reasoning benchmarksdemonstrate the effectiveness of LogicLLM. Besides, we conduct extensiveablation studies to analyze the key factors in designing logic-oriented proxytasks.",,arXiv,['cs.cl'],, make a choice! knowledge base question answering with incontext learning,"['Chuanyuan Tan', 'Yuehe Chen', 'Wenbiao Shao', 'Wenliang Chen']",http://arxiv.org/pdf/2305.13972v1.pdf,2023-05-23,," Question answering over knowledge bases (KBQA) aims to answer factoidquestions with a given knowledge base (KB). Due to the large scale of KB,annotated data is impossible to cover all fact schemas in KB, which poses achallenge to the generalization ability of methods that require a sufficientamount of annotated data. Recently, LLMs have shown strong few-shot performancein many NLP tasks. We expect LLM can help existing methods improve theirgeneralization ability, especially in low-resource situations. In this paper,we present McL-KBQA, a framework that incorporates the few-shot ability of LLMinto the KBQA method via ICL-based multiple choice and then improves theeffectiveness of the QA tasks. Experimental results on two KBQA datasetsdemonstrate the competitive performance of McL-KBQA with strong improvements ingeneralization. We expect to explore a new way to QA tasks from KBQA inconjunction with LLM, how to generate answers normatively and correctly withstrong generalization.",,arXiv,['cs.cl'],, ctqscorer combining multiple features for incontext example selection for machine translation,"['Aswanth Kumar', 'Ratish Puduppully', 'Raj Dabre', 'Anoop Kunchukuttan']",http://arxiv.org/pdf/2305.14105v2.pdf,2023-05-23,," Large language models have demonstrated the capability to perform on machinetranslation when the input is prompted with a few examples (in-contextlearning). Translation quality depends on various features of the selectedexamples, such as their quality and relevance, but previous work haspredominantly focused on individual features in isolation. In this paper, wepropose a general framework for combining different features influencingexample selection. We learn a regression model, CTQ Scorer (ContextualTranslation Quality), that selects examples based on multiple features in orderto maximize the translation quality. On multiple language pairs and languagemodels, we show that CTQ Scorer helps significantly outperform random selectionas well as strong single-factor baselines reported in the literature. We alsosee an improvement of over 2.5 COMET points on average with respect to a strongBM25 retrieval-based baseline.",,arXiv,"['cs.cl', 'cs.ai']",, empowering llmbased machine translation with cultural awareness,"['Binwei Yao', 'Ming Jiang', 'Diyi Yang', 'Junjie Hu']",http://arxiv.org/pdf/2305.14328v1.pdf,2023-05-23,," Traditional neural machine translation (NMT) systems often fail to translatesentences that contain culturally specific information. Most previous NMTmethods have incorporated external cultural knowledge during training, whichrequires fine-tuning on low-frequency items specific to the culture. 
Recentin-context learning utilizes lightweight prompts to guide large language models(LLMs) to perform machine translation, however, whether such an approach worksin terms of injecting culture awareness into machine translation remainsunclear. To this end, we introduce a new data curation pipeline to construct aculturally relevant parallel corpus, enriched with annotations ofcultural-specific entities. Additionally, we design simple but effectiveprompting strategies to assist this LLM-based translation. Extensiveexperiments show that our approaches can largely help incorporate culturalknowledge into LLM-based machine translation, outperforming traditional NMTsystems in translating cultural-specific sentences.",,arXiv,['cs.cl'],, selfchecker plugandplay modules for factchecking with large language models,"['Miaoran Li', 'Baolin Peng', 'Zhu Zhang']",http://arxiv.org/pdf/2305.14623v1.pdf,2023-05-24,," Fact-checking is an essential task in NLP that is commonly utilized forvalidating the factual accuracy of claims. Prior work has mainly focused onfine-tuning pre-trained languages models on specific datasets, which can becomputationally intensive and time-consuming. With the rapid development oflarge language models (LLMs), such as ChatGPT and GPT-3, researchers are nowexploring their in-context learning capabilities for a wide range of tasks. Inthis paper, we aim to assess the capacity of LLMs for fact-checking byintroducing Self-Checker, a framework comprising a set of plug-and-play modulesthat facilitate fact-checking by purely prompting LLMs in an almost zero-shotsetting. This framework provides a fast and efficient way to constructfact-checking systems in low-resource environments. Empirical resultsdemonstrate the potential of Self-Checker in utilizing LLMs for fact-checking.However, there is still significant room for improvement compared to SOTAfine-tuned models, which suggests that LLM adoption could be a promisingapproach for future fact-checking research.",,arXiv,['cs.cl'],, expertprompting instructing large language models to be distinguished experts,"['Benfeng Xu', 'An Yang', 'Junyang Lin', 'Quan Wang', 'Chang Zhou', 'Yongdong Zhang', 'Zhendong Mao']",http://arxiv.org/pdf/2305.14688v1.pdf,2023-05-24,," The answering quality of an aligned large language model (LLM) can bedrastically improved if treated with proper crafting of prompts. In this paper,we propose ExpertPrompting to elicit the potential of LLMs to answer asdistinguished experts. We first utilize In-Context Learning to automaticallysynthesize detailed and customized descriptions of the expert identity for eachspecific instruction, and then ask LLMs to provide answer conditioned on suchagent background. Based on this augmented prompting strategy, we produce a newset of instruction-following data using GPT-3.5, and train a competitiveopen-source chat assistant called ExpertLLaMA. We employ GPT4-based evaluationto show that 1) the expert data is of significantly higher quality than vanillaanswers, and 2) ExpertLLaMA outperforms existing open-source opponents andachieves 96\% of the original ChatGPT's capability. 
All data and theExpertLLaMA model will be made publicly available at\url{https://github.com/OFA-Sys/ExpertLLaMA}.",,arXiv,"['cs.cl', 'cs.ai']",, getting sick after seeing a doctor diagnosing and mitigating knowledge conflicts in event temporal reasoning,"['Tianqing Fang', 'Zhaowei Wang', 'Wenxuan Zhou', 'Hongming Zhang', 'Yangqiu Song', 'Muhao Chen']",http://arxiv.org/pdf/2305.14970v1.pdf,2023-05-24,," Event temporal reasoning aims at identifying the temporal relations betweentwo or more events. However, knowledge conflicts arise when there is a mismatchbetween the actual temporal relations of events in the context and the priorknowledge or biases learned by the model. We first systematically definedistinct kinds of bias in event temporal reasoning, which include eventrelation prior bias, tense bias, narrative bias, and dependency bias, asindicators to study knowledge conflicts. To mitigate such event-relatedknowledge conflict, we introduce a Counterfactual Data Augmentation basedmethod that can be applied to both Pre-trained Language Models (PLMs) and LargeLanguage Models (LLMs) either as additional training data or demonstrations forIn-Context Learning. Experiments suggest the importance of mitigating knowledgeconflicts in event temporal reasoning tasks for reducing hallucination andhighlight the potential of counterfactual data augmentation for improving modelperformance.",,arXiv,"['cs.cl', 'cs.ai']",, boosting crosslingual transferability in multilingual models via incontext learning,"['Sunkyoung Kim', 'Dayeon Ki', 'Yireun Kim', 'Jinsik Lee']",http://arxiv.org/pdf/2305.15233v1.pdf,2023-05-24,," Existing cross-lingual transfer (CLT) prompting methods are only concernedwith monolingual demonstration examples in the source language. In this paper,we propose In-CLT, a novel cross-lingual transfer prompting method thatleverages both source and target languages to construct the demonstrationexamples. We conduct comprehensive evaluations on multilingual benchmarks,focusing on question answering tasks. Experiment results show that In-CLTprompt not only improves multilingual models' cross-lingual transferability,but also demonstrates remarkable unseen language generalization ability. In-CLTprompting, in particular, improves model performance by 10 to 20\% points onaverage when compared to prior cross-lingual transfer approaches. We alsoobserve the surprising performance gain on the other multilingual benchmarks,especially in reasoning tasks. Furthermore, we investigate the relationshipbetween lexical similarity and pre-training corpora in terms of thecross-lingual transfer gap.",,arXiv,"['cs.cl', 'cs.ai']",, a mechanism for solving relational tasks in transformer language models,"['Jack Merullo', 'Carsten Eickhoff', 'Ellie Pavlick']",http://arxiv.org/pdf/2305.16130v2.pdf,2023-05-25,," A primary criticism towards language models (LMs) is their inscrutability.This paper presents evidence that, despite their size and complexity, LMssometimes exploit a simple computational mechanism to solve one-to-onerelational tasks (e.g., capital_of(Poland)=Warsaw). 
We investigate a range oflanguage model sizes (from 124M parameters to 176B parameters) in an in-contextlearning setting, and find that for a variety of tasks (involving capitalcities, upper-casing, and past-tensing) a key part of the mechanism reduces toa simple linear update typically applied by the feedforward (FFN) networks.These updates also tend to promote the output of the relation in acontent-independent way (e.g., encoding Poland:Warsaw::China:Beijing),revealing a predictable pattern that these models take in solving these tasks.We further show that this mechanism is specific to tasks that require retrievalfrom pretraining memory, rather than retrieval from local context. Our resultscontribute to a growing body of work on the mechanistic interpretability ofLLMs, and offer reason to be optimistic that, despite the massive andnon-linear nature of the models, the strategies they ultimately use to solvetasks can sometimes reduce to familiar and even intuitive algorithms.",,arXiv,"['cs.cl', 'cs.lg']",, augmenting large language model translators via translation memories,"['Yongyu Mu', 'Abudurexiti Reheman', 'Zhiquan Cao', 'Yuchun Fan', 'Bei Li', 'Yinqiao Li', 'Tong Xiao', 'Chunliang Zhang', 'Jingbo Zhu']",http://arxiv.org/pdf/2305.17367v1.pdf,2023-05-27,," Using translation memories (TMs) as prompts is a promising approach toin-context learning of machine translation models. In this work, we take a steptowards prompting large language models (LLMs) with TMs and making them bettertranslators. We find that the ability of LLMs to ``understand'' prompts isindeed helpful for making better use of TMs. Experiments show that the resultsof a pre-trained LLM translator can be greatly improved by using high-qualityTM-based prompts. These results are even comparable to those of thestate-of-the-art NMT systems which have access to large-scale in-domainbilingual data and are well tuned on the downstream tasks.",,arXiv,['cs.cl'],, towards explainable conversational recommender systems,"['Shuyu Guo', 'Shuo Zhang', 'Weiwei Sun', 'Pengjie Ren', 'Zhumin Chen', 'Zhaochun Ren']",http://arxiv.org/pdf/2305.18363v1.pdf,2023-05-27,," Explanations in conventional recommender systems have demonstrated benefitsin helping the user understand the rationality of the recommendations andimproving the system's efficiency, transparency, and trustworthiness. In theconversational environment, multiple contextualized explanations need to begenerated, which poses further challenges for explanations. To better measureexplainability in conversational recommender systems (CRS), we propose tenevaluation perspectives based on concepts from conventional recommender systemstogether with the characteristics of CRS. We assess five existing CRS benchmarkdatasets using these metrics and observe the necessity of improving theexplanation quality of CRS. To achieve this, we conduct manual and automaticapproaches to extend these dialogues and construct a new CRS dataset, namelyExplainable Recommendation Dialogues (E-ReDial). It includes 756 dialogues withover 2,000 high-quality rewritten explanations. We compare two baselineapproaches to perform explanation generation based on E-ReDial. Experimentalresults suggest that models trained on E-ReDial can significantly improveexplainability while introducing knowledge into the models can further improvethe performance. GPT-3 in the in-context learning setting can generate morerealistic and diverse movie descriptions. 
In contrast, T5 training on E-ReDialcan better generate clear reasons for recommendations based on userpreferences. E-ReDial is available at https://github.com/Superbooming/E-ReDial.",,arXiv,"['cs.ir', 'cs.ai']",, grammar prompting for domainspecific language generation with large language models,"['Bailin Wang', 'Zi Wang', 'Xuezhi Wang', 'Yuan Cao', 'Rif A. Saurous', 'Yoon Kim']",http://arxiv.org/pdf/2305.19234v3.pdf,2023-05-30,," Large language models (LLMs) can learn to perform a wide range of naturallanguage tasks from just a handful of in-context examples. However, forgenerating strings from highly structured languages (e.g., semantic parsing tocomplex domain-specific languages), it is challenging for the LLM to generalizefrom just a few exemplars. We propose \emph{grammar prompting}, a simpleapproach to enable LLMs to use external knowledge and domain-specificconstraints, expressed through a grammar in Backus--Naur Form (BNF), duringin-context learning. Grammar prompting augments each demonstration example witha specialized grammar that is minimally sufficient for generating theparticular output example, where the specialized grammar is a subset of thefull DSL grammar. For inference, the LLM first predicts a BNF grammar given atest input, and then generates the output according to the rules of thegrammar. Experiments demonstrate that grammar prompting can enable LLMs toperform competitively on a diverse set of DSL generation tasks, includingsemantic parsing (SMCalFlow, Overnight, GeoQuery), PDDL planning, andSMILES-based molecule generation.",,arXiv,"['cs.cl', 'cs.ai']",, prompt to be consistent is better than selfconsistent fewshot and zeroshot fact verification with pretrained language models,"['Fengzhu Zeng', 'Wei Gao']",http://arxiv.org/pdf/2306.02569v1.pdf,2023-06-05,," Few-shot or zero-shot fact verification only relies on a few or no labeledtraining examples. In this paper, we propose a novel method called ProToCo, to\underline{Pro}mpt pre-trained language models (PLMs) \underline{To} be\underline{Co}nsistent, for improving the factuality assessment capability ofPLMs in the few-shot and zero-shot settings. Given a claim-evidence pair,ProToCo generates multiple variants of the claim with different relations andframes a simple consistency mechanism as constraints for making compatiblepredictions across these variants. We update PLMs by using parameter-efficientfine-tuning (PEFT), leading to more accurate predictions in few-shot andzero-shot fact verification tasks. Our experiments on three public verificationdatasets show that ProToCo significantly outperforms state-of-the-art few-shotfact verification baselines. With a small number of unlabeled instances,ProToCo also outperforms the strong zero-shot learner T0 on zero-shotverification. Compared to large PLMs using in-context learning (ICL) method,ProToCo outperforms OPT-30B and the Self-Consistency-enabled OPT-6.7B model inboth few- and zero-shot settings.",,arXiv,['cs.cl'],, modular visual question answering via code generation,"['Sanjay Subramanian', 'Medhini Narasimhan', 'Kushal Khangaonkar', 'Kevin Yang', 'Arsha Nagrani', 'Cordelia Schmid', 'Andy Zeng', 'Trevor Darrell', 'Dan Klein']",http://arxiv.org/pdf/2306.05392v1.pdf,2023-06-08,," We present a framework that formulates visual question answering as modularcode generation. 
In contrast to prior work on modular approaches to VQA, ourapproach requires no additional training and relies on pre-trained languagemodels (LMs), visual models pre-trained on image-caption pairs, and fifty VQAexamples used for in-context learning. The generated Python programs invoke andcompose the outputs of the visual models using arithmetic and conditionallogic. Our approach improves accuracy on the COVR dataset by at least 3% and onthe GQA dataset by roughly 2% compared to the few-shot baseline that does notemploy code generation.",,arXiv,['cs.cl'],, disasterresponsegpt large language models for accelerated plan of action development in disaster response scenarios,"['Vinicius G. Goecks', 'Nicholas R. Waytowich']",http://arxiv.org/pdf/2306.17271v1.pdf,2023-06-29,," The development of plans of action in disaster response scenarios is atime-consuming process. Large Language Models (LLMs) offer a powerful solutionto expedite this process through in-context learning. This study presentsDisasterResponseGPT, an algorithm that leverages LLMs to generate valid plansof action quickly by incorporating disaster response and planning guidelines inthe initial prompt. In DisasterResponseGPT, users input the scenariodescription and receive a plan of action as output. The proposed methodgenerates multiple plans within seconds, which can be further refined followingthe user's feedback. Preliminary results indicate that the plans of actiondeveloped by DisasterResponseGPT are comparable to human-generated ones whileoffering greater ease of modification in real-time. This approach has thepotential to revolutionize disaster response operations by enabling rapidupdates and adjustments during the plan's execution.",,arXiv,"['cs.lg', 'i.2.7; j.7; k.4.0']",, reasoning before responding integrating commonsensebased causality explanation for empathetic response generation,"['Yahui Fu', 'Koji Inoue', 'Chenhui Chu', 'Tatsuya Kawahara']",http://arxiv.org/pdf/2308.00085v2.pdf,2023-07-28,," Recent approaches to empathetic response generation try to incorporatecommonsense knowledge or reasoning about the causes of emotions to betterunderstand the user's experiences and feelings. However, these approachesmainly focus on understanding the causalities of context from the user'sperspective, ignoring the system's perspective. In this paper, we propose acommonsense-based causality explanation approach for diverse empatheticresponse generation that considers both the user's perspective (user's desiresand reactions) and the system's perspective (system's intentions andreactions). We enhance ChatGPT's ability to reason for the system's perspectiveby integrating in-context learning with commonsense knowledge. Then, weintegrate the commonsense-based causality explanation with both ChatGPT and aT5-based model. Experimental evaluations demonstrate that our methodoutperforms other comparable methods on both automatic and human evaluations.",,arXiv,"['cs.cl', 'cs.ai']",, jen1 textguided universal music generation with omnidirectional diffusion models,"['Peike Li', 'Boyu Chen', 'Yao Yao', 'Yikai Wang', 'Allen Wang', 'Alex Wang']",http://arxiv.org/pdf/2308.04729v1.pdf,2023-08-09,," Music generation has attracted growing interest with the advancement of deepgenerative models. However, generating music conditioned on textualdescriptions, known as text-to-music, remains challenging due to the complexityof musical structures and high sampling rate requirements. 
Despite the task'ssignificance, prevailing generative models exhibit limitations in musicquality, computational efficiency, and generalization. This paper introducesJEN-1, a universal high-fidelity model for text-to-music generation. JEN-1 is adiffusion model incorporating both autoregressive and non-autoregressivetraining. Through in-context learning, JEN-1 performs various generation tasksincluding text-guided music generation, music inpainting, and continuation.Evaluations demonstrate JEN-1's superior performance over state-of-the-artmethods in text-music alignment and music quality while maintainingcomputational efficiency. Our demos are available athttp://futureverse.com/research/jen/demos/jen1",,arXiv,"['cs.sd', 'cs.ai', 'cs.lg', 'cs.mm', 'eess.as']",, algorithm of thoughts enhancing exploration of ideas in large language models,"['Bilgehan Sel', 'Ahmad Al-Tawaha', 'Vanshaj Khattar', 'Ruoxi Jia', 'Ming Jin']",http://arxiv.org/pdf/2308.10379v2.pdf,2023-08-20,," Current literature, aiming to surpass the ""Chain-of-Thought"" approach, oftenresorts to an external modus operandi involving halting, modifying, and thenresuming the generation process to boost Large Language Models' (LLMs)reasoning capacities. This mode escalates the number of query requests, leadingto increased costs, memory, and computational overheads. Addressing this, wepropose the Algorithm of Thoughts -- a novel strategy that propels LLMs throughalgorithmic reasoning pathways, pioneering a new mode of in-context learning.By employing algorithmic examples, we exploit the innate recurrence dynamics ofLLMs, expanding their idea exploration with merely one or a few queries. Ourtechnique outperforms earlier single-query methods and stands on par with arecent multi-query strategy that employs an extensive tree search algorithm.Intriguingly, our results suggest that instructing an LLM using an algorithmcan lead to performance surpassing that of the algorithm itself, hinting atLLM's inherent ability to weave its intuition into optimized searches. We probeinto the underpinnings of our method's efficacy and its nuances in application.",,arXiv,"['cs.cl', 'cs.ai']",, building emotional support chatbots in the era of llms,"['Zhonghua Zheng', 'Lizi Liao', 'Yang Deng', 'Liqiang Nie']",http://arxiv.org/pdf/2308.11584v1.pdf,2023-08-17,," The integration of emotional support into various conversational scenariospresents profound societal benefits, such as social interactions, mental healthcounseling, and customer service. However, there are unsolved challenges thathinder real-world applications in this field, including limited dataavailability and the absence of well-accepted model training paradigms. Thiswork endeavors to navigate these challenges by harnessing the capabilities ofLarge Language Models (LLMs). We introduce an innovative methodology thatsynthesizes human insights with the computational prowess of LLMs to curate anextensive emotional support dialogue dataset. Our approach is initiated with ameticulously designed set of dialogues spanning diverse scenarios as generativeseeds. By utilizing the in-context learning potential of ChatGPT, werecursively generate an ExTensible Emotional Support dialogue dataset, namedExTES. Following this, we deploy advanced tuning techniques on the LLaMA model,examining the impact of diverse training strategies, ultimately yielding an LLMmeticulously optimized for emotional support interactions. 
An exhaustiveassessment of the resultant model showcases its proficiency in offeringemotional support, marking a pivotal step in the realm of emotional supportbots and paving the way for subsequent research and implementations.",,arXiv,"['cs.cl', 'cs.ai']",, breaking the bank with chatgpt fewshot text classification for finance,"['Lefteris Loukas', 'Ilias Stogiannidis', 'Prodromos Malakasiotis', 'Stavros Vassos']",http://arxiv.org/pdf/2308.14634v1.pdf,2023-08-28,," We propose the use of conversational GPT models for easy and quick few-shottext classification in the financial domain using the Banking77 dataset. Ourapproach involves in-context learning with GPT-3.5 and GPT-4, which minimizesthe technical expertise required and eliminates the need for expensive GPUcomputing while yielding quick and accurate results. Additionally, we fine-tuneother pre-trained, masked language models with SetFit, a recent contrastivelearning technique, to achieve state-of-the-art results both in full-data andfew-shot settings. Our findings show that querying GPT-3.5 and GPT-4 canoutperform fine-tuned, non-generative models even with fewer examples. However,subscription fees associated with these solutions may be considered costly forsmall organizations. Lastly, we find that generative models perform better onthe given task when shown representative samples selected by a human expertrather than when shown random ones. We conclude that a) our proposed methodsoffer a practical solution for few-shot tasks in datasets with limited labelavailability, and b) our state-of-the-art results can inspire future work inthe area.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg', 'q-fin.cp']",, genderspecific machine translation with large language models,"['Eduardo Sánchez', 'Pierre Andrews', 'Pontus Stenetorp', 'Mikel Artetxe', 'Marta R. Costa-jussà']",http://arxiv.org/pdf/2309.03175v1.pdf,2023-09-06,," Decoder-only Large Language Models (LLMs) have demonstrated potential inmachine translation (MT), albeit with performance slightly lagging behindtraditional encoder-decoder Neural Machine Translation (NMT) systems. However,LLMs offer a unique advantage: the ability to control the properties of theoutput through prompts. In this study, we harness this flexibility to exploreLLaMa's capability to produce gender-specific translations for languages withgrammatical gender. Our results indicate that LLaMa can generategender-specific translations with competitive accuracy and gender biasmitigation when compared to NLLB, a state-of-the-art multilingual NMT system.Furthermore, our experiments reveal that LLaMa's translations are robust,showing significant performance drops when evaluated against opposite-genderreferences in gender-ambiguous datasets but maintaining consistency in lessambiguous contexts. This research provides insights into the potential andchallenges of using LLMs for gender-specific translations and highlights theimportance of in-context learning to elicit new tasks in LLMs.",,arXiv,['cs.cl'],, improving open information extraction with large language models a study on demonstration uncertainty,"['Chen Ling', 'Xujiang Zhao', 'Xuchao Zhang', 'Yanchi Liu', 'Wei Cheng', 'Haoyu Wang', 'Zhengzhang Chen', 'Takao Osaki', 'Katsushi Matsuda', 'Haifeng Chen', 'Liang Zhao']",http://arxiv.org/pdf/2309.03433v1.pdf,2023-09-07,," Open Information Extraction (OIE) task aims at extracting structured factsfrom unstructured text, typically in the form of (subject, relation, object)triples. 
Despite the potential of large language models (LLMs) like ChatGPT asa general task solver, they lag behind state-of-the-art (supervised) methods inOIE tasks due to two key issues. First, LLMs struggle to distinguish irrelevantcontext from relevant relations and generate structured output due to therestrictions on fine-tuning the model. Second, LLMs generates responsesautoregressively based on probability, which makes the predicted relations lackconfidence. In this paper, we assess the capabilities of LLMs in improving theOIE task. Particularly, we propose various in-context learning strategies toenhance LLM's instruction-following ability and a demonstration uncertaintyquantification module to enhance the confidence of the generated relations. Ourexperiments on three OIE benchmark datasets show that our approach holds itsown against established supervised methods, both quantitatively andqualitatively.",,arXiv,['cs.cl'],, epa easy prompt augmentation on large language models via multiple sources and multiple targets,"['Hongyuan Lu', 'Wai Lam']",http://arxiv.org/pdf/2309.04725v1.pdf,2023-09-09,," Large language models (LLMs) have shown promising performance on various NLPtasks via task prompting. And their performance can be further improved byappending task demonstrations to the head of the prompt. And usually, a betterperformance can be achieved with more demonstrations. However, asking the usersto write the demonstrations can be cumbersome. As a simple yet cost-effectiveworkaround, this paper proposes a novel method called EPA (\textbf{E}asy\textbf{P}rompt \textbf{A}ugmentation)\footnote{While this paper considersaugmenting prompts via demonstrations, we name it EPA as the name EDA isalready taken by a well-known NLP method \citep{wei-zou-2019-eda}.} thateffectively minimizes user efforts in writing demonstrations while improvingthe model performance at the same time. EPA achieves these goals byautomatically augmenting the demonstrations with multiple sources/targets,where each of them paraphrases each other. This is well motivated as augmentingdata via paraphrasing effectively improves neural language models. EPA thusemploys paraphrasing as an augmentation method for in-context learning.Extensive experiments indicate that EPA effectively improves both NLU and NLGtasks, covering from natural language inference to machine translation intranslating tens of languages.\footnote{Code and data will be released uponpublication.}",,arXiv,['cs.cl'],, converser fewshot conversational dense retrieval with synthetic data generation,"['Chao-Wei Huang', 'Chen-Yu Hsu', 'Tsu-Yuan Hsu', 'Chen-An Li', 'Yun-Nung Chen']",http://arxiv.org/pdf/2309.06748v1.pdf,2023-09-13,," Conversational search provides a natural interface for information retrieval(IR). Recent approaches have demonstrated promising results in applying denseretrieval to conversational IR. However, training dense retrievers requireslarge amounts of in-domain paired data. This hinders the development ofconversational dense retrievers, as abundant in-domain conversations areexpensive to collect. In this paper, we propose CONVERSER, a framework fortraining conversational dense retrievers with at most 6 examples of in-domaindialogues. Specifically, we utilize the in-context learning capability of largelanguage models to generate conversational queries given a passage in theretrieval corpus. 
Experimental results on conversational retrieval benchmarksOR-QuAC and TREC CAsT 19 show that the proposed CONVERSER achieves comparableperformance to fully-supervised models, demonstrating the effectiveness of ourproposed framework in few-shot conversational dense retrieval. All source codeand generated datasets are available at https://github.com/MiuLab/CONVERSER",,arXiv,"['cs.cl', 'cs.ir']",, fewshot adaptation for parsing contextual utterances with llms,"['Kevin Lin', 'Patrick Xia', 'Hao Fang']",http://arxiv.org/pdf/2309.10168v1.pdf,2023-09-18,," We evaluate the ability of semantic parsers based on large language models(LLMs) to handle contextual utterances. In real-world settings, there typicallyexists only a limited number of annotated contextual utterances due toannotation cost, resulting in an imbalance compared to non-contextualutterances. Therefore, parsers must adapt to contextual utterances with a fewtraining examples. We examine four major paradigms for doing so inconversational semantic parsing i.e., Parse-with-Utterance-History,Parse-with-Reference-Program, Parse-then-Resolve, and Rewrite-then-Parse. Tofacilitate such cross-paradigm comparisons, we constructSMCalFlow-EventQueries, a subset of contextual examples from SMCalFlow withadditional annotations. Experiments with in-context learning and fine-tuningsuggest that Rewrite-then-Parse is the most promising paradigm whenholistically considering parsing accuracy, annotation cost, and error types.",,arXiv,['cs.cl'],, toward unified controllable text generation via regular expression instruction,"['Xin Zheng', 'Hongyu Lin', 'Xianpei Han', 'Le Sun']",http://arxiv.org/pdf/2309.10447v2.pdf,2023-09-19,," Controllable text generation is a fundamental aspect of natural languagegeneration, with numerous methods proposed for different constraint types.However, these approaches often require significant architectural or decodingmodifications, making them challenging to apply to additional constraints orresolve different constraint combinations. To address this, our paperintroduces Regular Expression Instruction (REI), which utilizes aninstruction-based mechanism to fully exploit regular expressions' advantages touniformly model diverse constraints. Specifically, our REI supports all popularfine-grained controllable generation constraints, i.e., lexical, positional,and length, as well as their complex combinations, via regular expression-styleinstructions. Our method only requires fine-tuning on medium-scale languagemodels or few-shot, in-context learning on large language models, and requiresno further adjustment when applied to various constraint combinations.Experiments demonstrate that our straightforward approach yields high successrates and adaptability to various constraints while maintaining competitivenessin automatic metrics and outperforming most previous baselines.",,arXiv,"['cs.cl', 'cs.ai']",, languageoriented communication with semantic coding and knowledge distillation for texttoimage generation,"['Hyelin Nam', 'Jihong Park', 'Jinho Choi', 'Mehdi Bennis', 'Seong-Lyun Kim']",http://arxiv.org/pdf/2309.11127v1.pdf,2023-09-20,," By integrating recent advances in large language models (LLMs) and generativemodels into the emerging semantic communication (SC) paradigm, in this articlewe put forward to a novel framework of language-oriented semantic communication(LSC). 
In LSC, machines communicate using human language messages that can be interpreted and manipulated via natural language processing (NLP) techniques for SC efficiency. To demonstrate LSC's potential, we introduce three innovative algorithms: 1) semantic source coding (SSC) which compresses a text prompt into its key head words capturing the prompt's syntactic essence while maintaining their appearance order to keep the prompt's context; 2) semantic channel coding (SCC) that improves robustness against errors by substituting head words with their lengthier synonyms; and 3) semantic knowledge distillation (SKD) that produces listener-customized prompts via in-context learning the listener's language style. In a communication task for progressive text-to-image generation, the proposed methods achieve higher perceptual similarities with fewer transmissions while enhancing robustness in noisy communication channels.",,arXiv,"['eess.sp', 'cs.ai', 'cs.cl']",, towards effective disambiguation for machine translation with large language models,"['Vivek Iyer', 'Pinzhen Chen', 'Alexandra Birch']",http://arxiv.org/pdf/2309.11668v2.pdf,2023-09-20,," Resolving semantic ambiguity has long been recognised as a central challenge in the field of Machine Translation. Recent work on benchmarking translation performance on ambiguous sentences has exposed the limitations of conventional Neural Machine Translation (NMT) systems, which fail to handle many such cases. Large language models (LLMs) have emerged as a promising alternative, demonstrating comparable performance to traditional NMT models while introducing new paradigms for controlling the target outputs. In this paper, we study the capabilities of LLMs to translate ""ambiguous sentences"" - i.e. those containing highly polysemous words and/or rare word senses. We also propose two ways to improve their disambiguation capabilities, through a) in-context learning and b) fine-tuning on carefully curated ambiguous datasets. Experiments show that our methods can match or outperform state-of-the-art systems such as DeepL and NLLB in four out of five language directions. Our research provides valuable insights into effectively adapting LLMs to become better disambiguators during Machine Translation. We release our curated disambiguation corpora and resources at https://data.statmt.org/ambiguous-europarl.",,arXiv,['cs.cl'],, incontext interference in chatbased large language models,"['Eric Nuertey Coleman', 'Julio Hurtado', 'Vincenzo Lomonaco']",http://arxiv.org/pdf/2309.12727v1.pdf,2023-09-22,," Large language models (LLMs) have had a huge impact on society due to their impressive capabilities and vast knowledge of the world. Various applications and tools have been created that allow users to interact with these models in a black-box scenario. However, one limitation of this scenario is that users cannot modify the internal knowledge of the model, and the only way to add or modify internal knowledge is by explicitly mentioning it to the model during the current interaction. This learning process is called in-context training, and it refers to training that is confined to the user's current session or context. In-context learning has significant applications, but also has limitations that are seldom studied. In this paper, we present a study that shows how the model can suffer from interference between information that continually flows in the context, causing it to forget previously learned knowledge, which can reduce the model's performance.
Along with showing the problem, we propose an evaluation benchmark based on the bAbI dataset.",,arXiv,"['cs.ai', 'cs.cl']",, affect recognition in conversations using large language models,"['Shutong Feng', 'Guangzhi Sun', 'Nurul Lubis', 'Chao Zhang', 'Milica Gašić']",http://arxiv.org/pdf/2309.12881v1.pdf,2023-09-22,," Affect recognition, encompassing emotions, moods, and feelings, plays a pivotal role in human communication. In the realm of conversational artificial intelligence (AI), the ability to discern and respond to human affective cues is a critical factor for creating engaging and empathetic interactions. This study delves into the capacity of large language models (LLMs) to recognise human affect in conversations, with a focus on both open-domain chit-chat dialogues and task-oriented dialogues. Leveraging three diverse datasets, namely IEMOCAP, EmoWOZ, and DAIC-WOZ, covering a spectrum of dialogues from casual conversations to clinical interviews, we evaluated and compared LLMs' performance in affect recognition. Our investigation explores the zero-shot and few-shot capabilities of LLMs through in-context learning (ICL) as well as their model capacities through task-specific fine-tuning. Additionally, this study takes into account the potential impact of automatic speech recognition (ASR) errors on LLM predictions. With this work, we aim to shed light on the extent to which LLMs can replicate human-like affect recognition capabilities in conversations.",,arXiv,['cs.cl'],, calibrating llmbased evaluator,"['Yuxuan Liu', 'Tianchi Yang', 'Shaohan Huang', 'Zihan Zhang', 'Haizhen Huang', 'Furu Wei', 'Weiwei Deng', 'Feng Sun', 'Qi Zhang']",http://arxiv.org/pdf/2309.13308v1.pdf,2023-09-23,," Recent advancements in large language models (LLMs) on language modeling and emergent capabilities make them a promising reference-free evaluator of natural language generation quality, and a competent alternative to human evaluation. However, hindered by the closed-source or high computational demand to host and tune, there is a lack of practice to further calibrate an off-the-shelf LLM-based evaluator towards better human alignment. In this work, we propose AutoCalibrate, a multi-stage, gradient-free approach to automatically calibrate and align an LLM-based evaluator toward human preference. Instead of explicitly modeling human preferences, we first implicitly encompass them within a set of human labels. Then, an initial set of scoring criteria is drafted by the language model itself, leveraging in-context learning on different few-shot examples. To further calibrate this set of criteria, we select the best performers and re-draft them with self-refinement. Our experiments on multiple text quality evaluation datasets illustrate a significant improvement in correlation with expert evaluation through calibration. Our comprehensive qualitative analysis conveys insightful intuitions and observations on the essence of effective scoring criteria.",,arXiv,['cs.cl'],, mededit model editing for medical question answering with external knowledge bases,"['Yucheng Shi', 'Shaochen Xu', 'Zhengliang Liu', 'Tianming Liu', 'Xiang Li', 'Ninghao Liu']",http://arxiv.org/pdf/2309.16035v1.pdf,2023-09-27,," Large Language Models (LLMs), although powerful in general domains, often perform poorly on domain-specific tasks like medical question answering (QA). Moreover, they tend to function as ""black-boxes,"" making it challenging to modify their behavior.
Addressing this, our study delves into model editing utilizing in-context learning, aiming to improve LLM responses without the need for fine-tuning or retraining. Specifically, we propose a comprehensive retrieval strategy to extract medical facts from an external knowledge base, and then we incorporate them into the query prompt for the LLM. Focusing on medical QA using the MedQA-SMILE dataset, we evaluate the impact of different retrieval models and the number of facts provided to the LLM. Notably, our edited Vicuna model exhibited an accuracy improvement from 44.46% to 48.54%. This work underscores the potential of model editing to enhance LLM performance, offering a practical approach to mitigate the challenges of black-box LLMs.",,arXiv,"['cs.cl', 'cs.ai']",, towards llmbased fact verification on news claims with a hierarchical stepbystep prompting method,"['Xuan Zhang', 'Wei Gao']",http://arxiv.org/pdf/2310.00305v1.pdf,2023-09-30,," While large pre-trained language models (LLMs) have shown their impressive capabilities in various NLP tasks, they are still under-explored in the misinformation domain. In this paper, we examine LLMs with in-context learning (ICL) for news claim verification, and find that only with 4-shot demonstration examples, the performance of several prompting methods can be comparable with previous supervised models. To further boost performance, we introduce a Hierarchical Step-by-Step (HiSS) prompting method which directs LLMs to separate a claim into several subclaims and then verify each of them via multiple question-answering steps progressively. Experiment results on two public misinformation datasets show that HiSS prompting outperforms the state-of-the-art fully-supervised approach and strong few-shot ICL-enabled baselines.",,arXiv,['cs.cl'],, fool your (vision and) language model with embarrassingly simple permutations,"['Yongshuo Zong', 'Tingyang Yu', 'Bingchen Zhao', 'Ruchika Chavhan', 'Timothy Hospedales']",http://arxiv.org/pdf/2310.01651v1.pdf,2023-10-02,," Large language and vision-language models are rapidly being deployed in practice thanks to their impressive capabilities in instruction following, in-context learning, and so on. This raises an urgent need to carefully analyse their robustness so that stakeholders can understand if and when such models are trustworthy enough to be relied upon in any given application. In this paper, we highlight a specific vulnerability in popular models, namely permutation sensitivity in multiple-choice question answering (MCQA). Specifically, we show empirically that popular models are vulnerable to adversarial permutation in answer sets for multiple-choice prompting, which is surprising as models should ideally be as invariant to prompt permutation as humans are. These vulnerabilities persist across various model sizes, and exist in very recent language and vision-language models. Code is available at \url{https://github.com/ys-zong/FoolyourVLLMs}.",,arXiv,['cs.lg'],, improving automatic vqa evaluation using large language models,"['Oscar Mañas', 'Benno Krojer', 'Aishwarya Agrawal']",http://arxiv.org/pdf/2310.02567v2.pdf,2023-10-04,," 8 years after the visual question answering (VQA) task was proposed, accuracy remains the primary metric for automatic evaluation. VQA Accuracy has been effective so far in the IID evaluation setting. However, our community is undergoing a shift towards open-ended generative models and OOD evaluation.
In this new paradigm, the existing VQA Accuracy metric is overly stringent and underestimates the performance of VQA systems. Thus, there is a need to develop more robust automatic VQA metrics that serve as a proxy for human judgment. In this work, we propose to leverage the in-context learning capabilities of instruction-tuned large language models (LLMs) to build a better VQA metric. We formulate VQA evaluation as an answer-rating task where the LLM is instructed to score the accuracy of a candidate answer given a set of reference answers. We demonstrate the proposed metric better correlates with human judgment compared to existing metrics across several VQA models and benchmarks. We hope wide adoption of our metric will contribute to better estimating the research progress on the VQA task. We plan to release the evaluation code and collected human judgments.",,arXiv,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.lg']",, guideline learning for incontext information extraction,"['Chaoxu Pang', 'Yixuan Cao', 'Qiang Ding', 'Ping Luo']",http://arxiv.org/pdf/2310.05066v2.pdf,2023-10-08,," Large language models (LLMs) can perform a new task by merely conditioning on task instructions and a few input-output examples, without optimizing any parameters. This is called In-Context Learning (ICL). In-context Information Extraction (IE) has recently garnered attention in the research community. However, the performance of In-context IE generally lags behind the state-of-the-art supervised expert models. We highlight a key reason for this shortfall: underspecified task description. The limited-length context struggles to thoroughly express the intricate IE task instructions and various edge cases, leading to misalignment in task comprehension with humans. In this paper, we propose a Guideline Learning (GL) framework for In-context IE which reflectively learns and follows guidelines. During the learning phase, GL automatically synthesizes a set of guidelines based on a few error cases, and during inference, GL retrieves helpful guidelines for better ICL. Moreover, we propose a self-consistency-based active learning method to enhance the efficiency of GL. Experiments on event extraction and relation extraction show that GL can significantly improve the performance of in-context IE.",,arXiv,"['cs.cl', 'cs.lg']",, harnessing the power of large language models for empathetic response generation empirical investigations and improvements,"['Yushan Qian', 'Wei-Nan Zhang', 'Ting Liu']",http://arxiv.org/pdf/2310.05140v3.pdf,2023-10-08,," Empathetic dialogue is an indispensable part of building harmonious social relationships and contributes to the development of a helpful AI. Previous approaches are mainly based on fine small-scale language models. With the advent of ChatGPT, the application effect of large language models (LLMs) in this field has attracted great attention. This work empirically investigates the performance of LLMs in generating empathetic responses and proposes three improvement methods of semantically similar in-context learning, two-stage interactive generation, and combination with the knowledge base. Extensive experiments show that LLMs can significantly benefit from our proposed methods and are able to achieve state-of-the-art performance in both automatic and human evaluations.
Additionally, we explore the possibility of GPT-4 simulating human evaluators.",,arXiv,"['cs.cl', 'cs.ai']",, llmlingua compressing prompts for accelerated inference of large language models,"['Huiqiang Jiang', 'Qianhui Wu', 'Chin-Yew Lin', 'Yuqing Yang', 'Lili Qiu']",http://arxiv.org/pdf/2310.05736v2.pdf,2023-10-09,," Large language models (LLMs) have been applied in various applications due to their astonishing capabilities. With advancements in technologies such as chain-of-thought (CoT) prompting and in-context learning (ICL), the prompts fed to LLMs are becoming increasingly lengthy, even exceeding tens of thousands of tokens. To accelerate model inference and reduce cost, this paper presents LLMLingua, a coarse-to-fine prompt compression method that involves a budget controller to maintain semantic integrity under high compression ratios, a token-level iterative compression algorithm to better model the interdependence between compressed contents, and an instruction tuning based method for distribution alignment between language models. We conduct experiments and analysis over four datasets from different scenarios, i.e., GSM8K, BBH, ShareGPT, and Arxiv-March23; showing that the proposed approach yields state-of-the-art performance and allows for up to 20x compression with little performance loss. Our code is available at https://aka.ms/LLMLingua.",,arXiv,"['cs.cl', 'cs.lg']",, selective demonstrations for crossdomain texttosql,"['Shuaichen Chang', 'Eric Fosler-Lussier']",http://arxiv.org/pdf/2310.06302v1.pdf,2023-10-10,," Large language models (LLMs) with in-context learning have demonstrated impressive generalization capabilities in the cross-domain text-to-SQL task, without the use of in-domain annotations. However, incorporating in-domain demonstration examples has been found to greatly enhance LLMs' performance. In this paper, we delve into the key factors within in-domain examples that contribute to the improvement and explore whether we can harness these benefits without relying on in-domain annotations. Based on our findings, we propose a demonstration selection framework ODIS which utilizes both out-of-domain examples and synthetically generated in-domain examples to construct demonstrations. By retrieving demonstrations from hybrid sources, ODIS leverages the advantages of both, showcasing its effectiveness compared to baseline methods that rely on a single data source. Furthermore, ODIS outperforms state-of-the-art approaches on two cross-domain text-to-SQL datasets, with improvements of 1.1 and 11.8 points in execution accuracy, respectively.",,arXiv,['cs.cl'],, jailbreak and guard aligned language models with only few incontext demonstrations,"['Zeming Wei', 'Yifei Wang', 'Yisen Wang']",http://arxiv.org/pdf/2310.06387v1.pdf,2023-10-10,," Large Language Models (LLMs) have shown remarkable success in various tasks, but concerns about their safety and the potential for generating malicious content have emerged. In this paper, we explore the power of In-Context Learning (ICL) in manipulating the alignment ability of LLMs. We find that by providing just a few in-context demonstrations without fine-tuning, LLMs can be manipulated to increase or decrease the probability of jailbreaking, i.e. answering malicious prompts. Based on these observations, we propose In-Context Attack (ICA) and In-Context Defense (ICD) methods for jailbreaking and guarding aligned language model purposes.
ICA crafts malicious contexts to guide models in generating harmful outputs, while ICD enhances model robustness by demonstrations of rejecting to answer harmful prompts. Our experiments show the effectiveness of ICA and ICD in increasing or reducing the success rate of adversarial jailbreaking attacks. Overall, we shed light on the potential of ICL to influence LLM behavior and provide a new perspective for enhancing the safety and alignment of LLMs.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cl', 'cs.cr']",, a search for prompts generating structured answers from contracts,"['Adam Roegiest', 'Radha Chitta', 'Jonathan Donnelly', 'Maya Lash', 'Alexandra Vtyurina', 'François Longtin']",http://arxiv.org/pdf/2310.10141v1.pdf,2023-10-16,," In many legal processes being able to action on the concrete implication of a legal question can be valuable to automating human review or signalling certain conditions (e.g., alerts around automatic renewal). To support such tasks, we present a form of legal question answering that seeks to return one (or more) fixed answers for a question about a contract clause. After showing that unstructured generative question answering can have questionable outcomes for such a task, we discuss our exploration methodology for legal question answering prompts using OpenAI's \textit{GPT-3.5-Turbo} and provide a summary of insights. Using insights gleaned from our qualitative experiences, we compare our proposed template prompts against a common semantic matching approach and find that our prompt templates are far more accurate despite being less reliable in the exact response return. With some additional tweaks to prompts and the use of in-context learning, we are able to further improve the performance of our proposed strategy while maximizing the reliability of responses as best we can.",,arXiv,['cs.cv'],, large language models meet openworld intent discovery and recognition an evaluation of chatgpt,"['Xiaoshuai Song', 'Keqing He', 'Pei Wang', 'Guanting Dong', 'Yutao Mou', 'Jingang Wang', 'Yunsen Xian', 'Xunliang Cai', 'Weiran Xu']",http://arxiv.org/pdf/2310.10176v1.pdf,2023-10-16,," The tasks of out-of-domain (OOD) intent discovery and generalized intent discovery (GID) aim to extend a closed intent classifier to open-world intent sets, which is crucial to task-oriented dialogue (TOD) systems. Previous methods address them by fine-tuning discriminative models. Recently, although some studies have been exploring the application of large language models (LLMs) represented by ChatGPT to various downstream tasks, it is still unclear whether ChatGPT has the ability to discover and incrementally extend OOD intents. In this paper, we comprehensively evaluate ChatGPT on OOD intent discovery and GID, and then outline the strengths and weaknesses of ChatGPT. Overall, ChatGPT exhibits consistent advantages under zero-shot settings, but is still at a disadvantage compared to fine-tuned models. More deeply, through a series of analytical experiments, we summarize and discuss the challenges faced by LLMs including clustering, domain-specific understanding, and cross-domain in-context learning scenarios.
Finally, we provide empirical guidance for future directions to address these challenges.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, moconvq unified physicsbased motion control via scalable discrete representations,"['Heyuan Yao', 'Zhenhua Song', 'Yuyang Zhou', 'Tenglong Ao', 'Baoquan Chen', 'Libin Liu']",http://arxiv.org/pdf/2310.10198v3.pdf,2023-10-16,," In this work, we present MoConVQ, a novel unified framework for physics-based motion control leveraging scalable discrete representations. Building upon vector quantized variational autoencoders (VQ-VAE) and model-based reinforcement learning, our approach effectively learns motion embeddings from a large, unstructured dataset spanning tens of hours of motion examples. The resultant motion representation not only captures diverse motion skills but also offers a robust and intuitive interface for various applications. We demonstrate the versatility of MoConVQ through several applications: universal tracking control from various motion sources, interactive character control with latent motion representations using supervised learning, physics-based motion generation from natural language descriptions using the GPT framework, and, most interestingly, seamless integration with large language models (LLMs) with in-context learning to tackle complex and abstract tasks.",,arXiv,"['cs.cv', 'cs.gr']",, semantic parsing by large language models for intricate updating strategies of zeroshot dialogue state tracking,"['Yuxiang Wu', 'Guanting Dong', 'Weiran Xu']",http://arxiv.org/pdf/2310.10520v3.pdf,2023-10-16,," Zero-shot Dialogue State Tracking (DST) addresses the challenge of acquiring and annotating task-oriented dialogues, which can be time-consuming and costly. However, DST extends beyond simple slot-filling and requires effective updating strategies for tracking dialogue state as conversations progress. In this paper, we propose ParsingDST, a new In-Context Learning (ICL) method, to introduce additional intricate updating strategies in zero-shot DST. Our approach reformulates the DST task by leveraging powerful Large Language Models (LLMs) and translating the original dialogue text to JSON through semantic parsing as an intermediate state. We also design a novel framework that includes more modules to ensure the effectiveness of updating strategies in the text-to-JSON process. Experimental results demonstrate that our approach outperforms existing zero-shot DST methods on MultiWOZ, exhibiting significant improvements in Joint Goal Accuracy (JGA) and slot accuracy compared to existing ICL methods. Our code has been released.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, mastering the task of open information extraction with large language models and consistent reasoning environment,"['Ji Qi', 'Kaixuan Ji', 'Xiaozhi Wang', 'Jifan Yu', 'Kaisheng Zeng', 'Lei Hou', 'Juanzi Li', 'Bin Xu']",http://arxiv.org/pdf/2310.10590v1.pdf,2023-10-16,," Open Information Extraction (OIE) aims to extract objective structured knowledge from natural texts, which has attracted growing attention to build dedicated models with human experience. As the large language models (LLMs) have exhibited remarkable in-context learning capabilities, a question arises as to whether the task of OIE can be effectively tackled with this paradigm. In this paper, we explore solving the OIE problem by constructing an appropriate reasoning environment for LLMs.
Specifically, we first propose a method to effectively estimate the discrepancy of syntactic distribution between an LLM and test samples, which can serve as correlation evidence for preparing positive demonstrations. Upon the evidence, we introduce a simple yet effective mechanism to establish the reasoning environment for LLMs on specific tasks. Without bells and whistles, experimental results on the standard CaRB benchmark demonstrate that our $6$-shot approach outperforms the state-of-the-art supervised method, achieving a $55.3$ $F_1$ score. Further experiments on TACRED and ACE05 show that our method can naturally generalize to other information extraction tasks, resulting in improvements of $5.7$ and $6.8$ $F_1$ scores, respectively.",,arXiv,['cs.cl'],, exploring automatic evaluation methods based on a decoderbased llm for text generation,"['Tomohito Kasahara', 'Daisuke Kawahara']",http://arxiv.org/pdf/2310.11026v1.pdf,2023-10-17,," Automatic evaluation of text generation is essential for improving the accuracy of generation tasks. In light of the current trend towards increasingly larger decoder-based language models, we investigate automatic evaluation methods based on such models for text generation. This paper compares various methods, including tuning with encoder-based models and large language models under equal conditions, on two different tasks, machine translation evaluation and semantic textual similarity, in two languages, Japanese and English. Experimental results show that compared to the tuned encoder-based models, the tuned decoder-based models perform poorly. The analysis of the causes for this suggests that the decoder-based models focus on surface word sequences and do not capture meaning. It is also revealed that in-context learning of very large decoder-based models such as ChatGPT makes it difficult to identify fine-grained semantic differences.",,arXiv,['cs.cl'],, learning from red teaming gender bias provocation and mitigation in large language models,"['Hsuan Su', 'Cheng-Chu Cheng', 'Hua Farn', 'Shachi H Kumar', 'Saurav Sahay', 'Shang-Tse Chen', 'Hung-yi Lee']",http://arxiv.org/pdf/2310.11079v1.pdf,2023-10-17,," Recently, researchers have made considerable improvements in dialogue systems with the progress of large language models (LLMs) such as ChatGPT and GPT-4. These LLM-based chatbots encode the potential biases while retaining disparities that can harm humans during interactions. The traditional biases investigation methods often rely on human-written test cases. However, these test cases are usually expensive and limited. In this work, we propose a first-of-its-kind method that automatically generates test cases to detect LLMs' potential gender bias. We apply our method to three well-known LLMs and find that the generated test cases effectively identify the presence of biases. To address the biases identified, we propose a mitigation strategy that uses the generated test cases as demonstrations for in-context learning to circumvent the need for parameter fine-tuning. The experimental results show that LLMs generate fairer responses with the proposed approach.",,arXiv,"['cs.cl', 'cs.ai']",, evaluating llms for privilegeescalation scenarios,"['Andreas Happe', 'Aaron Kaplan', 'Jürgen Cito']",http://arxiv.org/pdf/2310.11409v2.pdf,2023-10-17,," Penetration testing, an essential component of cybersecurity, allows organizations to proactively identify and remediate vulnerabilities in their systems, thus bolstering their defense mechanisms against potential cyberattacks.
One recent advancement in the realm of penetration testing is the utilization of Large Language Models (LLMs). We explore the intersection of LLMs and penetration testing to gain insight into their capabilities and challenges in the context of privilege escalation. We create an automated Linux privilege-escalation benchmark utilizing local virtual machines. We introduce an LLM-guided privilege-escalation tool designed for evaluating different LLMs and prompt strategies against our benchmark. We analyze the impact of different prompt designs, the benefits of in-context learning, and the advantages of offering high-level guidance to LLMs. We discuss challenging areas for LLMs, including maintaining focus during testing, coping with errors, and finally comparing them with both stochastic parrots as well as with human hackers.",,arXiv,"['cs.cr', 'cs.ai']",, measuring pointwise $mathcal{v}$usable information incontextly,"['Sheng Lu', 'Shan Chen', 'Yingya Li', 'Danielle Bitterman', 'Guergana Savova', 'Iryna Gurevych']",http://arxiv.org/pdf/2310.12300v2.pdf,2023-10-18,," In-context learning (ICL) is a new learning paradigm that has gained popularity along with the development of large language models. In this work, we adapt a recently proposed hardness metric, pointwise $\mathcal{V}$-usable information (PVI), to an in-context version (in-context PVI). Compared to the original PVI, in-context PVI is more efficient in that it requires only a few exemplars and does not require fine-tuning. We conducted a comprehensive empirical analysis to evaluate the reliability of in-context PVI. Our findings indicate that in-context PVI estimates exhibit similar characteristics to the original PVI. Specific to the in-context setting, we show that in-context PVI estimates remain consistent across different exemplar selections and numbers of shots. The variance of in-context PVI estimates across different exemplar selections is insignificant, which suggests that in-context PVI is stable. Furthermore, we demonstrate how in-context PVI can be employed to identify challenging instances. Our work highlights the potential of in-context PVI and provides new insights into the capabilities of ICL.",,arXiv,['cs.cl'],, attack prompt generation for red teaming and defending large language models,"['Boyi Deng', 'Wenjie Wang', 'Fuli Feng', 'Yang Deng', 'Qifan Wang', 'Xiangnan He']",http://arxiv.org/pdf/2310.12505v1.pdf,2023-10-19,," Large language models (LLMs) are susceptible to red teaming attacks, which can induce LLMs to generate harmful content. Previous research constructs attack prompts via manual or automatic methods, which have their own limitations on construction cost and quality. To address these issues, we propose an integrated approach that combines manual and automatic methods to economically generate high-quality attack prompts. Specifically, considering the impressive capabilities of newly emerged LLMs, we propose an attack framework to instruct LLMs to mimic human-generated prompts through in-context learning. Furthermore, we propose a defense framework that fine-tunes victim LLMs through iterative interactions with the attack framework to enhance their safety against red teaming attacks.
Extensive experiments on different LLMs validate the effectiveness of our proposed attack and defense frameworks. Additionally, we release a series of attack prompts datasets named SAP with varying sizes, facilitating the safety evaluation and enhancement of more LLMs. Our code and dataset are available at https://github.com/Aatrox103/SAP .",,arXiv,"['cs.cl', 'cs.cr', 'cs.lg']",, are structural concepts universal in transformer language models towards interpretable crosslingual generalization,"['Ningyu Xu', 'Qi Zhang', 'Jingting Ye', 'Menghan Zhang', 'Xuanjing Huang']",http://arxiv.org/pdf/2310.12794v2.pdf,2023-10-19,," Large language models (LLMs) have exhibited considerable cross-lingual generalization abilities, whereby they implicitly transfer knowledge across languages. However, the transfer is not equally successful for all languages, especially for low-resource ones, which poses an ongoing challenge. It is unclear whether we have reached the limits of implicit cross-lingual generalization and if explicit knowledge transfer is viable. In this paper, we investigate the potential for explicitly aligning conceptual correspondence between languages to enhance cross-lingual generalization. Using the syntactic aspect of language as a testbed, our analyses of 43 languages reveal a high degree of alignability among the spaces of structural concepts within each language for both encoder-only and decoder-only LLMs. We then propose a meta-learning-based method to learn to align conceptual spaces of different languages, which facilitates zero-shot and few-shot generalization in concept classification and also offers insights into the cross-lingual in-context learning phenomenon. Experiments on syntactic analysis tasks show that our approach achieves competitive results with state-of-the-art methods and narrows the performance gap between languages, particularly benefiting those with limited resources.",,arXiv,['cs.cl'],, mind the instructions a holistic evaluation of consistency and interactions in promptbased learning,"['Lucas Weber', 'Elia Bruni', 'Dieuwke Hupkes']",http://arxiv.org/pdf/2310.13486v1.pdf,2023-10-20,," Finding the best way of adapting pre-trained language models to a task is a big challenge in current NLP. Just like the previous generation of task-tuned models (TT), models that are adapted to tasks via in-context-learning (ICL) are robust in some setups but not in others. Here, we present a detailed analysis of which design choices cause instabilities and inconsistencies in LLM predictions. First, we show how spurious correlations between input distributions and labels -- a known issue in TT models -- form only a minor problem for prompted models. Then, we engage in a systematic, holistic evaluation of different factors that have been found to influence predictions in a prompting setup. We test all possible combinations of a range of factors on both vanilla and instruction-tuned (IT) LLMs of different scale and statistically analyse the results to show which factors are the most influential, interactive or stable. Our results show which factors can be used without precautions and which should be avoided or handled with care in most settings.",,arXiv,"['cs.cl', 'cs.ai']",, a simple baseline for knowledgebased visual question answering,"['Alexandros Xenos', 'Themos Stafylakis', 'Ioannis Patras', 'Georgios Tzimiropoulos']",http://arxiv.org/pdf/2310.13570v2.pdf,2023-10-20,," This paper is on the problem of Knowledge-Based Visual Question Answering (KB-VQA).
Recent works have emphasized the significance of incorporating both explicit (through external databases) and implicit (through LLMs) knowledge to answer questions requiring external knowledge effectively. A common limitation of such approaches is that they consist of relatively complicated pipelines and often heavily rely on accessing the GPT-3 API. Our main contribution in this paper is to propose a much simpler and readily reproducible pipeline which, in a nutshell, is based on efficient in-context learning by prompting LLaMA (1 and 2) using question-informative captions as contextual information. Contrary to recent approaches, our method is training-free, does not require access to external databases or APIs, and yet achieves state-of-the-art accuracy on the OK-VQA and A-OK-VQA datasets. Finally, we perform several ablation studies to understand important aspects of our method. Our code is publicly available at https://github.com/alexandrosXe/ASimple-Baseline-For-Knowledge-Based-VQA",,arXiv,['cs.cv'],, an incontext schema understanding method for knowledge base question answering,"['Yantao Liu', 'Zixuan Li', 'Xiaolong Jin', 'Yucan Guo', 'Long Bai', 'Saiping Guan', 'Jiafeng Guo', 'Xueqi Cheng']",http://arxiv.org/pdf/2310.14174v2.pdf,2023-10-22,," The Knowledge Base Question Answering (KBQA) task aims to answer natural language questions based on a given knowledge base. Recently, Large Language Models (LLMs) have shown strong capabilities in language understanding and can be used to solve this task. In doing so, a major challenge for LLMs is to overcome the immensity and heterogeneity of knowledge base schemas. Existing methods bypass this challenge by initially employing LLMs to generate drafts of logic forms without schema-specific details. Then, an extra module is used to inject schema information into these drafts. In contrast, in this paper, we propose a simple In-Context Schema Understanding (ICSU) method that enables LLMs to directly understand schemas by leveraging in-context learning. Specifically, ICSU provides schema information to LLMs using schema-related annotated examples. We investigate three example retrieval strategies based on raw questions, anonymized questions, and generated SPARQL queries. Experimental results show that ICSU demonstrates competitive performance compared to baseline methods on both the KQA Pro and WebQSP datasets.",,arXiv,['cs.cl'],, from chaos to clarity claim normalization to empower factchecking,"['Megha Sundriyal', 'Tanmoy Chakraborty', 'Preslav Nakov']",http://arxiv.org/pdf/2310.14338v3.pdf,2023-10-22,," With the rise of social media, users are exposed to many misleading claims. However, the pervasive noise inherent in these posts presents a challenge in identifying precise and prominent claims that require verification. Extracting the important claims from such posts is arduous and time-consuming, yet it is an underexplored problem. Here, we aim to bridge this gap. We introduce a novel task, Claim Normalization (aka ClaimNorm), which aims to decompose complex and noisy social media posts into more straightforward and understandable forms, termed normalized claims. We propose CACN, a pioneering approach that leverages chain-of-thought and claim check-worthiness estimation, mimicking human reasoning processes, to comprehend intricate claims. Moreover, we capitalize on the in-context learning capabilities of large language models to provide guidance and to improve claim normalization.
To evaluate the effectiveness ofour proposed model, we meticulously compile a comprehensive real-world dataset,CLAN, comprising more than 6k instances of social media posts alongside theirrespective normalized claims. Our experiments demonstrate that CACN outperformsseveral baselines across various evaluation measures. Finally, our rigorouserror analysis validates CACN's capabilities and pitfalls.",,arXiv,"['cs.cl', 'cs.ai']",, retrievalaugmented chainofthought in semistructured domains,"['Vaibhav Mavi', 'Abulhair Saparov', 'Chen Zhao']",http://arxiv.org/pdf/2310.14435v1.pdf,2023-10-22,," Applying existing question answering (QA) systems to specialized domains likelaw and finance presents challenges that necessitate domain expertise. Althoughlarge language models (LLMs) have shown impressive language comprehension andin-context learning capabilities, their inability to handle very longinputs/contexts is well known. Tasks specific to these domains need significantbackground knowledge, leading to contexts that can often exceed the maximumlength that existing LLMs can process. This study explores leveraging thesemi-structured nature of legal and financial data to efficiently retrieverelevant context, enabling the use of LLMs for domain-specialized QA. Theresulting system outperforms contemporary models and also provides usefulexplanations for the answers, encouraging the integration of LLMs into legaland financial NLP systems for future research.",,arXiv,"['cs.cl', 'cs.ai']",, statistical depth for ranking and characterizing transformerbased text embeddings,"['Parker Seegmiller', 'Sarah Masud Preum']",http://arxiv.org/pdf/2310.15010v1.pdf,2023-10-23,," The popularity of transformer-based text embeddings calls for betterstatistical tools for measuring distributions of such embeddings. One such toolwould be a method for ranking texts within a corpus by centrality, i.e.assigning each text a number signifying how representative that text is of thecorpus as a whole. However, an intrinsic center-outward ordering ofhigh-dimensional text representations is not trivial. A statistical depth is afunction for ranking k-dimensional objects by measuring centrality with respectto some observed k-dimensional distribution. We adopt a statistical depth tomeasure distributions of transformer-based text embeddings, transformer-basedtext embedding (TTE) depth, and introduce the practical use of this depth forboth modeling and distributional inference in NLP pipelines. We first defineTTE depth and an associated rank sum test for determining whether two corporadiffer significantly in embedding space. We then use TTE depth for the task ofin-context learning prompt selection, showing that this approach reliablyimproves performance over statistical baseline approaches across six textclassification tasks. Finally, we use TTE depth and the associated rank sumtest to characterize the distributions of synthesized and human-generatedcorpora, showing that five recent synthetic data augmentation processes cause ameasurable distributional shift away from associated human-generated text.",,arXiv,['cs.cl'],, the bla benchmark investigating basic language abilities of pretrained multimodal models,"['Xinyi Chen', 'Raquel Fernández', 'Sandro Pezzelle']",http://arxiv.org/pdf/2310.15061v1.pdf,2023-10-23,," Despite the impressive performance achieved by pre-trainedlanguage-and-vision models in downstream tasks, it remains an open questionwhether this reflects a proper understanding of image-text interaction. 
In thiswork, we explore to what extent they handle basic linguistic constructions --active-passive voice, coordination, and relative clauses -- that even preschoolchildren can typically master. We present BLA, a novel, automaticallyconstructed benchmark to evaluate multimodal models on these Basic LanguageAbilities. We show that different types of Transformer-based systems, such asCLIP, ViLBERT, and BLIP2, generally struggle with BLA in a zero-shot setting,in line with previous findings. Our experiments, in particular, show that mostof the tested models only marginally benefit when fine-tuned or prompted withconstruction-specific samples. Yet, the generative BLIP2 shows promisingtrends, especially in an in-context learning setting. This opens the door tousing BLA not only as an evaluation benchmark but also to improve models' basiclanguage abilities.",,arXiv,"['cs.cl', 'cs.ai', 'cs.cv']",, llmintheloop leveraging large language model for thematic analysis,"['Shih-Chieh Dai', 'Aiping Xiong', 'Lun-Wei Ku']",http://arxiv.org/pdf/2310.15100v1.pdf,2023-10-23,," Thematic analysis (TA) has been widely used for analyzing qualitative data inmany disciplines and fields. To ensure reliable analysis, the same piece ofdata is typically assigned to at least two human coders. Moreover, to producemeaningful and useful analysis, human coders develop and deepen their datainterpretation and coding over multiple iterations, making TA labor-intensiveand time-consuming. Recently the emerging field of large language models (LLMs)research has shown that LLMs have the potential replicate human-like behaviorin various tasks: in particular, LLMs outperform crowd workers ontext-annotation tasks, suggesting an opportunity to leverage LLMs on TA. Wepropose a human-LLM collaboration framework (i.e., LLM-in-the-loop) to conductTA with in-context learning (ICL). This framework provides the prompt to framediscussions with a LLM (e.g., GPT-3.5) to generate the final codebook for TA.We demonstrate the utility of this framework using survey datasets on theaspects of the music listening experience and the usage of a password manager.Results of the two case studies show that the proposed framework yields similarcoding quality to that of human coders but reduces TA's labor and time demands.",,arXiv,['cs.cl'],, ui layout generation with llms guided by ui grammar,"['Yuwen Lu', 'Ziang Tong', 'Qinyi Zhao', 'Chengzhi Zhang', 'Toby Jia-Jun Li']",http://arxiv.org/pdf/2310.15455v1.pdf,2023-10-24,," The recent advances in Large Language Models (LLMs) have stimulated interestamong researchers and industry professionals, particularly in their applicationto tasks concerning mobile user interfaces (UIs). This position paperinvestigates the use of LLMs for UI layout generation. Central to ourexploration is the introduction of UI grammar -- a novel approach we proposedto represent the hierarchical structure inherent in UI screens. The aim of thisapproach is to guide the generative capacities of LLMs more effectively andimprove the explainability and controllability of the process. Initialexperiments conducted with GPT-4 showed the promising capability of LLMs toproduce high-quality user interfaces via in-context learning. 
Furthermore, ourpreliminary comparative study suggested the potential of the grammar-basedapproach in improving the quality of generative results in specific aspects.",,arXiv,"['cs.hc', 'cs.ai']",, poe process of elimination for multiple choice reasoning,"['Chenkai Ma', 'Xinya Du']",http://arxiv.org/pdf/2310.15575v1.pdf,2023-10-24,," Language models (LMs) are capable of conducting in-context learning formultiple choice reasoning tasks, but the options in these tasks are treatedequally. As humans often first eliminate wrong options before picking the finalcorrect answer, we argue a similar two-step strategy can make LMs better atthese tasks. To this end, we present the Process of Elimination (POE), atwo-step scoring method. In the first step, POE scores each option, andeliminates seemingly wrong options. In the second step, POE masks these wrongoptions, and makes the final prediction from the remaining options. Zero-shotexperiments on 8 reasoning tasks illustrate the effectiveness of POE, and afollowing analysis finds our method to be especially performant on logicalreasoning tasks. We further analyze the effect of masks, and show that POEapplies to few-shot settings and large language models (LLMs) like ChatGPT.",,arXiv,['cs.cl'],, webwise web interface control and sequential exploration with large language models,"['Heyi Tao', 'Sethuraman T V', 'Michal Shlapentokh-Rothman', 'Derek Hoiem']",http://arxiv.org/pdf/2310.16042v2.pdf,2023-10-24,," The paper investigates using a Large Language Model (LLM) to automaticallyperform web software tasks using click, scroll, and text input operations.Previous approaches, such as reinforcement learning (RL) or imitation learning,are inefficient to train and task-specific. Our method uses filtered DocumentObject Model (DOM) elements as observations and performs tasks step-by-step,sequentially generating small programs based on the current observations. Weuse in-context learning, either benefiting from a single manually providedexample, or an automatically generated example based on a successful zero-shottrial. We evaluate the proposed method on the MiniWob++ benchmark. With onlyone in-context example, our WebWISE method achieves similar or betterperformance than other methods that require many demonstrations or trials.",,arXiv,"['cs.cl', 'cs.ai']",, from heuristic to analytic cognitively motivated strategies for coherent physical commonsense reasoning,"['Zheyuan Zhang', 'Shane Storks', 'Fengyuan Hu', 'Sungryull Sohn', 'Moontae Lee', 'Honglak Lee', 'Joyce Chai']",http://arxiv.org/pdf/2310.18364v1.pdf,2023-10-24,," Pre-trained language models (PLMs) have shown impressive performance invarious language tasks. However, they are prone to spurious correlations, andoften generate illusory information. In real-world applications, PLMs shouldjustify decisions with formalized, coherent reasoning chains, but thischallenge remains under-explored. Cognitive psychology theorizes that humansare capable of utilizing fast and intuitive heuristic thinking to makedecisions based on past experience, then rationalizing the decisions throughslower and deliberative analytic reasoning. We incorporate these interlinkeddual processes in fine-tuning and in-context learning with PLMs, applying themto two language understanding tasks that require coherent physical commonsensereasoning. 
We show that our proposed Heuristic-Analytic Reasoning (HAR)strategies drastically improve the coherence of rationalizations for modeldecisions, yielding state-of-the-art results on Tiered Reasoning for IntuitivePhysics (TRIP). We also find that this improved coherence is a direct result ofmore faithful attention to relevant language context in each step of reasoning.Our findings suggest that human-like reasoning strategies can effectivelyimprove the coherence and reliability of PLM reasoning.",,arXiv,"['cs.cl', 'cs.ai']",, narrowing the gap between zero and fewshot machine translation by matching styles,"['Weiting Tan', 'Haoran Xu', 'Lingfeng Shen', 'Shuyue Stella Li', 'Kenton Murray', 'Philipp Koehn', 'Benjamin Van Durme', 'Yunmo Chen']",http://arxiv.org/pdf/2311.02310v1.pdf,2023-11-04,," Large language models trained primarily in a monolingual setting havedemonstrated their ability to generalize to machine translation using zero- andfew-shot examples with in-context learning. However, even though zero-shottranslations are relatively good, there remains a discernible gap comparingtheir performance with the few-shot setting. In this paper, we investigate thefactors contributing to this gap and find that this gap can largely be closed(for about 70%) by matching the writing styles of the target corpus.Additionally, we explore potential approaches to enhance zero-shot baselineswithout the need for parallel demonstration examples, providing valuableinsights into how these methods contribute to improving translation metrics.",,arXiv,['cs.cl'],, instructed language models with retrievers are powerful entity linkers,"['Zilin Xiao', 'Ming Gong', 'Jie Wu', 'Xingyao Zhang', 'Linjun Shou', 'Jian Pei', 'Daxin Jiang']",http://arxiv.org/pdf/2311.03250v1.pdf,2023-11-06,," Generative approaches powered by large language models (LLMs) havedemonstrated emergent abilities in tasks that require complex reasoningabilities. Yet the generative nature still makes the generated content sufferfrom hallucinations, thus unsuitable for entity-centric tasks like entitylinking (EL) requiring precise entity predictions over a large knowledge base.We present Instructed Generative Entity Linker (INSGENEL), the first approachthat enables casual language models to perform entity linking over knowledgebases. Several methods to equip language models with EL capability wereproposed in this work, including (i) a sequence-to-sequence training ELobjective with instruction-tuning, (ii) a novel generative EL framework basedon a light-weight potential mention retriever that frees the model from heavyand non-parallelizable decoding, achieving 4$\times$ speedup without compromiseon linking metrics. INSGENEL outperforms previous generative alternatives with+6.8 F1 points gain on average, also with a huge advantage in training dataefficiency and training compute consumption. In addition, our skillfullyengineered in-context learning (ICL) framework for EL still lags behindINSGENEL significantly, reaffirming that the EL task remains a persistenthurdle for general LLMs.",,arXiv,"['cs.cl', 'cs.ai']",, metalearning via language model incontext tuning,"['Yanda Chen', 'Ruiqi Zhong', 'Sheng Zha', 'George Karypis', 'He He']",http://arxiv.org/pdf/2110.07814v2.pdf,2021-10-15,," The goal of meta-learning is to learn to adapt to a new task with only a fewlabeled examples. 
To tackle this problem in NLP, we propose $\textit{in-contexttuning}$, which recasts adaptation and prediction as a simple sequenceprediction problem: to form the input sequence, we concatenate the taskinstruction, the labeled examples, and the target input to predict; tometa-train the model to learn from in-context examples, we fine-tune apre-trained language model (LM) to predict the target label from the inputsequences on a collection of tasks. We benchmark our method on two collections of text classification tasks: LAMAand BinaryClfs. Compared to first-order MAML which adapts the model withgradient descent, our method better leverages the inductive bias of LMs toperform pattern matching, and outperforms MAML by an absolute $6\%$ AUC ROCscore on BinaryClfs, with increasing advantage w.r.t. model size. Compared tonon-fine-tuned in-context learning (i.e. prompting a raw LM), in-context tuningdirectly learns to learn from in-context examples. On BinaryClfs, in-contexttuning improves the average AUC-ROC score by an absolute $10\%$, and reducesthe variance with respect to example ordering by 6x and example choices by 2x.",,arXiv,"['cs.cl', 'cs.lg']",, glam efficient scaling of language models with mixtureofexperts,"['Nan Du', 'Yanping Huang', 'Andrew M. Dai', 'Simon Tong', 'Dmitry Lepikhin', 'Yuanzhong Xu', 'Maxim Krikun', 'Yanqi Zhou', 'Adams Wei Yu', 'Orhan Firat', 'Barret Zoph', 'Liam Fedus', 'Maarten Bosma', 'Zongwei Zhou', 'Tao Wang', 'Yu Emma Wang', 'Kellie Webster', 'Marie Pellat', 'Kevin Robinson', 'Kathleen Meier-Hellstern', 'Toju Duke', 'Lucas Dixon', 'Kun Zhang', 'Quoc V Le', 'Yonghui Wu', 'Zhifeng Chen', 'Claire Cui']",http://arxiv.org/pdf/2112.06905v2.pdf,2021-12-13,," Scaling language models with more data, compute and parameters has drivensignificant progress in natural language processing. For example, thanks toscaling, GPT-3 was able to achieve strong results on in-context learning tasks.However, training these large dense models requires significant amounts ofcomputing resources. In this paper, we propose and develop a family of languagemodels named GLaM (Generalist Language Model), which uses a sparsely activatedmixture-of-experts architecture to scale the model capacity while alsoincurring substantially less training cost compared to dense variants. Thelargest GLaM has 1.2 trillion parameters, which is approximately 7x larger thanGPT-3. It consumes only 1/3 of the energy used to train GPT-3 and requires halfof the computation flops for inference, while still achieving better overallzero-shot and one-shot performance across 29 NLP tasks.",,arXiv,['cs.cl'],, can language models learn from explanations in context,"['Andrew K. Lampinen', 'Ishita Dasgupta', 'Stephanie C. Y. Chan', 'Kory Matthewson', 'Michael Henry Tessler', 'Antonia Creswell', 'James L. McClelland', 'Jane X. Wang', 'Felix Hill']",http://arxiv.org/pdf/2204.02329v4.pdf,2022-04-05,," Language Models (LMs) can perform new tasks by adapting to a few in-contextexamples. For humans, explanations that connect examples to task principles canimprove learning. We therefore investigate whether explanations of few-shotexamples can help LMs. We annotate questions from 40 challenging tasks withanswer explanations, and various matched control explanations. We evaluate howdifferent types of explanations, instructions, and controls affect zero- andfew-shot performance. We analyze these results using statistical multilevelmodeling techniques that account for the nested dependencies among conditions,tasks, prompts, and models. 
We find that explanations can improve performance-- even without tuning. Furthermore, explanations hand-tuned for performance ona small validation set offer substantially larger benefits, and building aprompt by selecting examples and explanations together substantially improvesperformance over selecting examples alone. Finally, even untuned explanationsoutperform carefully matched controls, suggesting that the benefits are due tothe link between an example and its explanation, rather than lower-levelfeatures. However, only large models benefit. In summary, explanations cansupport the in-context learning of large LMs on challenging tasks.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, large language models can implement policy iteration,"['Ethan Brooks', 'Logan Walls', 'Richard L. Lewis', 'Satinder Singh']",http://arxiv.org/pdf/2210.03821v2.pdf,2022-10-07,," This work presents In-Context Policy Iteration, an algorithm for performingReinforcement Learning (RL), in-context, using foundation models. While theapplication of foundation models to RL has received considerable attention,most approaches rely on either (1) the curation of expert demonstrations(either through manual design or task-specific pretraining) or (2) adaptationto the task of interest using gradient methods (either fine-tuning or trainingof adapter layers). Both of these techniques have drawbacks. Collectingdemonstrations is labor-intensive, and algorithms that rely on them do notoutperform the experts from which the demonstrations were derived. All gradienttechniques are inherently slow, sacrificing the ""few-shot"" quality that madein-context learning attractive to begin with. In this work, we present analgorithm, ICPI, that learns to perform RL tasks without expert demonstrationsor gradients. Instead we present a policy-iteration method in which the promptcontent is the entire locus of learning. ICPI iteratively updates the contentsof the prompt from which it derives its policy through trial-and-errorinteraction with an RL environment. In order to eliminate the role ofin-weights learning (on which approaches like Decision Transformer relyheavily), we demonstrate our algorithm using Codex, a language model with noprior knowledge of the domains on which we evaluate it.",,arXiv,['cs.lg'],, retrievalaugmented multimodal language modeling,"['Michihiro Yasunaga', 'Armen Aghajanyan', 'Weijia Shi', 'Rich James', 'Jure Leskovec', 'Percy Liang', 'Mike Lewis', 'Luke Zettlemoyer', 'Wen-tau Yih']",http://arxiv.org/pdf/2211.12561v2.pdf,2022-11-22,," Recent multimodal models such as DALL-E and CM3 have achieved remarkableprogress in text-to-image and image-to-text generation. However, these modelsstore all learned knowledge (e.g., the appearance of the Eiffel Tower) in themodel parameters, requiring increasingly larger models and training data tocapture more knowledge. To integrate knowledge in a more scalable and modularway, we propose a retrieval-augmented multimodal model, which enables a basemultimodal model (generator) to refer to relevant text and images fetched by aretriever from external memory (e.g., documents on the web). Specifically, forthe retriever, we use a pretrained CLIP, and for the generator, we train a CM3Transformer on the LAION dataset. Our resulting model, namedRetrieval-Augmented CM3 (RA-CM3), is the first multimodal model that canretrieve and generate both text and images. 
We show that RA-CM3 significantly outperforms baseline multimodal models such as DALL-E and CM3 on both image and caption generation tasks (12 FID and 17 CIDEr improvements on MS-COCO), while requiring much less compute for training (<30% of DALL-E). Moreover, we show that RA-CM3 exhibits novel capabilities, such as faithful image generation and multimodal in-context learning (e.g., image generation from demonstrations).",,arXiv,"['cs.cv', 'cs.cl', 'cs.lg']",, "operationalizing specifications, in addition to test sets for evaluating constrained generative models","['Vikas Raunak', 'Matt Post', 'Arul Menezes']",http://arxiv.org/pdf/2212.00006v1.pdf,2022-11-19,," In this work, we present some recommendations on the evaluation of state-of-the-art generative models for constrained generation tasks. The progress on generative models has been rapid in recent years. These large-scale models have had three impacts: firstly, the fluency of generation in both language and vision modalities has rendered common average-case evaluation metrics much less useful in diagnosing system errors. Secondly, the same substrate models now form the basis of a number of applications, driven both by the utility of their representations as well as phenomena such as in-context learning, which raise the abstraction level of interacting with such models. Thirdly, the user expectations around these models and their feted public releases have made the technical challenge of out of domain generalization much less excusable in practice. Subsequently, our evaluation methodologies haven't adapted to these changes. More concretely, while the associated utility and methods of interacting with generative models have expanded, a similar expansion has not been observed in their evaluation practices. In this paper, we argue that the scale of generative models could be exploited to raise the abstraction level at which evaluation itself is conducted and provide recommendations for the same. Our recommendations are based on leveraging specifications as a powerful instrument to evaluate generation quality and are readily applicable to a variety of tasks.",,arXiv,"['cs.hc', 'cs.cl', 'cs.cv', 'cs.cy']",, lowresource authorship style transfer can nonfamous authors be imitated,"['Ajay Patel', 'Nicholas Andrews', 'Chris Callison-Burch']",http://arxiv.org/pdf/2212.08986v2.pdf,2022-12-18,," Authorship style transfer involves altering text to match the style of a target author whilst preserving the original meaning. Existing unsupervised approaches like STRAP have largely focused on style transfer to target authors with many examples of their writing style in books, speeches, or other published works. This high-resource training data requirement (often greater than 100,000 words) makes these approaches primarily useful for style transfer to published authors, politicians, or other well-known figures and authorship styles, while style transfer to non-famous authors has not been well-studied. We introduce the \textit{low-resource authorship style transfer} task, a more challenging class of authorship style transfer where only a limited amount of text in the target author's style may exist. In our experiments, we specifically choose source and target authors from Reddit and style transfer their Reddit posts, limiting ourselves to just 16 posts (on average ~500 words) of the target author's style. Style transfer accuracy is typically measured by how often a classifier or human judge will classify an output as written by the target author. 
Recent authorship representation models excel at authorship identification even with just a few writing samples, making automatic evaluation of this task possible for the first time through evaluation metrics we propose. Our results establish an in-context learning technique we develop as the strongest baseline, though we find current approaches do not yet achieve mastery of this challenging task. We release our data and implementations to encourage further investigation.",,arXiv,['cs.cl'],, dialog2api taskoriented dialogue with api description and example programs,"['Raphael Shu', 'Elman Mansimov', 'Tamer Alkhouli', 'Nikolaos Pappas', 'Salvatore Romeo', 'Arshit Gupta', 'Saab Mansour', 'Yi Zhang', 'Dan Roth']",http://arxiv.org/pdf/2212.09946v1.pdf,2022-12-20,," Functionality and dialogue experience are two important factors of task-oriented dialogue systems. Conventional approaches with closed schema (e.g., conversational semantic parsing) often fail as both the functionality and dialogue experience are strongly constrained by the underlying schema. We introduce a new paradigm for task-oriented dialogue - Dialog2API - to greatly expand the functionality and provide seamless dialogue experience. The conversational model interacts with the environment by generating and executing programs triggering a set of pre-defined APIs. The model also manages the dialogue policy and interacts with the user through generating appropriate natural language responses. By allowing generating free-form programs, Dialog2API supports composite goals by combining different APIs, whereas unrestricted program revision provides natural and robust dialogue experience. To facilitate Dialog2API, the core model is provided with API documents, an execution environment and optionally some example dialogues annotated with programs. We propose an approach tailored for the Dialog2API, where the dialogue states are represented by a stack of programs, with most recently mentioned program on the top of the stack. Dialog2API can work with many application scenarios such as software automation and customer service. In this paper, we construct a dataset for AWS S3 APIs and present evaluation results of in-context learning baselines.",,arXiv,['cs.cl'],, hint hypernetwork instruction tuning for efficient zero & fewshot generalisation,"['Hamish Ivison', 'Akshita Bhagia', 'Yizhong Wang', 'Hannaneh Hajishirzi', 'Matthew Peters']",http://arxiv.org/pdf/2212.10315v2.pdf,2022-12-20,," Recent NLP models have shown the remarkable ability to effectively generalise `zero-shot' to new tasks using only natural language instructions as guidance. However, many of these approaches suffer from high computational costs due to their reliance on concatenating lengthy instructions with every input example, resulting in costly reprocessing of the instruction. To avoid this, we introduce Hypernetworks for INstruction Tuning (HINT), which convert task instructions and examples into parameter-efficient modules inserted into an underlying model using a pretrained text encoder, eliminating the need to include instructions in the model input. The hypernetwork in HINT also produces an encoded instruction, which we concatenate with encoded inputs during decoding to further improve performance. HINT models outperform strong state-of-the-art baselines by over 10% when controlling for compute (measured in FLOPs). By converting instructions into modules, HINT models can effectively disregard the length of instructions and few-shot example inputs in terms of compute usage. 
As a result, HINT can enhance its performance by up to 25% by incorporating additional few-shot data, while utilizing only up to 5% more compute. This combines the strengths of parameter-efficient fine-tuning and in-context learning.",,arXiv,['cs.cl'],, parallel context windows for large language models,"['Nir Ratner', 'Yoav Levine', 'Yonatan Belinkov', 'Ori Ram', 'Inbal Magar', 'Omri Abend', 'Ehud Karpas', 'Amnon Shashua', 'Kevin Leyton-Brown', 'Yoav Shoham']",http://arxiv.org/pdf/2212.10947v3.pdf,2022-12-21,," When applied to processing long text, Large Language Models (LLMs) are limited by their context window. Existing efforts to address this limitation involve training specialized architectures, and cannot be easily applied to off-the-shelf LLMs. We present Parallel Context Windows (PCW), a method that alleviates the context window restriction for any off-the-shelf LLM without further training. The key to the approach is to carve a long context into chunks (``windows''), restrict the attention mechanism to apply only within each window, and re-use the positional embeddings across the windows. Our main results test the PCW approach on in-context learning with models that range in size between 750 million and 178 billion parameters, and show substantial improvements for tasks with diverse input and output spaces. We show additional benefits in other settings where long context windows may be beneficial: multi-hop questions and retrieval-augmented question answering with multiple retrieved documents. Our results highlight Parallel Context Windows as a promising method for applying off-the-shelf LLMs in a range of settings that require long text sequences. We make our code publicly available at https://github.com/ai21labs/parallel-context-windows.",,arXiv,['cs.cl'],, distinguishability calibration to incontext learning,"['Hongjing Li', 'Hanqi Yan', 'Yanran Li', 'Li Qian', 'Yulan He', 'Lin Gui']",http://arxiv.org/pdf/2302.06198v3.pdf,2023-02-13,," Recent years have witnessed increasing interests in prompt-based learning in which models can be trained on only a few annotated instances, making them suitable in low-resource settings. When using prompt-based learning for text classification, the goal is to use a pre-trained language model (PLM) to predict a missing token in a pre-defined template given an input text, which can be mapped to a class label. However, PLMs built on the transformer architecture tend to generate similar output embeddings, making it difficult to discriminate between different class labels. The problem is further exacerbated when dealing with classification tasks involving many fine-grained class labels. In this work, we alleviate this information diffusion issue, i.e., different tokens share a large proportion of similar information after going through stacked multiple self-attention layers in a transformer, by proposing a calibration method built on feature transformations through rotation and scaling to map a PLM-encoded embedding into a new metric space to guarantee the distinguishability of the resulting embeddings. Furthermore, we take the advantage of hyperbolic embeddings to capture the hierarchical relations among fine-grained class-associated token embedding by a coarse-to-fine metric learning strategy to enhance the distinguishability of the learned output embeddings. Extensive experiments on the three datasets under various settings demonstrate the effectiveness of our approach. 
Our code can be found at https://github.com/donttal/TARA.",,arXiv,['cs.cl'],, do we still need clinical language models,"['Eric Lehman', 'Evan Hernandez', 'Diwakar Mahajan', 'Jonas Wulff', 'Micah J. Smith', 'Zachary Ziegler', 'Daniel Nadler', 'Peter Szolovits', 'Alistair Johnson', 'Emily Alsentzer']",http://arxiv.org/pdf/2302.08091v1.pdf,2023-02-16,," Although recent advances in scaling large language models (LLMs) have resulted in improvements on many NLP tasks, it remains unclear whether these models trained primarily with general web text are the right tool in highly specialized, safety critical domains such as clinical text. Recent results have suggested that LLMs encode a surprising amount of medical knowledge. This raises an important question regarding the utility of smaller domain-specific language models. With the success of general-domain LLMs, is there still a need for specialized clinical models? To investigate this question, we conduct an extensive empirical analysis of 12 language models, ranging from 220M to 175B parameters, measuring their performance on 3 different clinical tasks that test their ability to parse and reason over electronic health records. As part of our experiments, we train T5-Base and T5-Large models from scratch on clinical notes from MIMIC III and IV to directly investigate the efficiency of clinical tokens. We show that relatively small specialized clinical models substantially outperform all in-context learning approaches, even when finetuned on limited annotated data. Further, we find that pretraining on clinical tokens allows for smaller, more parameter-efficient models that either match or outperform much larger language models trained on general text. We release the code and the models used under the PhysioNet Credentialed Health Data license and data use agreement.",,arXiv,['cs.cl'],, epalm efficient perceptual augmentation of language models,"['Mustafa Shukor', 'Corentin Dancette', 'Matthieu Cord']",http://arxiv.org/pdf/2303.11403v4.pdf,2023-03-20,," Large Language Models (LLMs) have so far impressed the world, with unprecedented capabilities that emerge in models at large scales. On the vision side, transformer models (i.e., ViT) are following the same trend, achieving the best performance on challenging benchmarks. With the abundance of such unimodal models, a natural question arises; do we need also to follow this trend to tackle multimodal tasks? In this work, we propose to rather direct effort to efficient adaptations of existing models, and propose to augment Language Models with perception. Existing approaches for adapting pretrained models for vision-language tasks still rely on several key components that hinder their efficiency. In particular, they still train a large number of parameters, rely on large multimodal pretraining, use encoders (e.g., CLIP) trained on huge image-text datasets, and add significant inference overhead. In addition, most of these approaches have focused on Zero-Shot and In Context Learning, with little to no effort on direct finetuning. We investigate the minimal computational effort needed to adapt unimodal models for multimodal tasks and propose a new challenging setup, alongside different approaches, that efficiently adapts unimodal pretrained models. 
We show that by freezing more than 99% of total parameters, training only one linear projection layer, and prepending only one trainable token, our approach (dubbed eP-ALM) significantly outperforms other baselines on VQA and Captioning across Image, Video, and Audio modalities, following the proposed setup. The code is available here: https://github.com/mshukor/eP-ALM.",,arXiv,"['cs.cv', 'cs.cl', 'cs.lg']",, towards making the most of chatgpt for machine translation,"['Keqin Peng', 'Liang Ding', 'Qihuang Zhong', 'Li Shen', 'Xuebo Liu', 'Min Zhang', 'Yuanxin Ouyang', 'Dacheng Tao']",http://arxiv.org/pdf/2303.13780v4.pdf,2023-03-24,," ChatGPT shows remarkable capabilities for machine translation (MT). Several prior studies have shown that it achieves comparable results to commercial systems for high-resource languages, but lags behind in complex tasks, e.g., low-resource and distant-language-pairs translation. However, they usually adopt simple prompts which can not fully elicit the capability of ChatGPT. In this paper, we aim to further mine ChatGPT's translation ability by revisiting several aspects: temperature, task information, and domain information, and correspondingly propose an optimal temperature setting and two (simple but effective) prompts: Task-Specific Prompts (TSP) and Domain-Specific Prompts (DSP). We show that: 1) The performance of ChatGPT depends largely on temperature, and a lower temperature usually can achieve better performance; 2) Emphasizing the task information can further improve ChatGPT's performance, particularly in complex MT tasks; 3) Introducing domain information can elicit ChatGPT's generalization ability and improve its performance in the specific domain; 4) ChatGPT tends to generate hallucinations for non-English-centric MT tasks, which can be partially addressed by our proposed prompts but still need to be highlighted for the MT/NLP community. We also explore the effects of advanced in-context learning strategies and find a (negative but interesting) observation: the powerful chain-of-thought prompt leads to word-by-word translation behavior, thus bringing significant translation degradation.",,arXiv,['cs.cl'],, $k$nn prompting beyondcontext learning with calibrationfree nearest neighbor inference,"['Benfeng Xu', 'Quan Wang', 'Zhendong Mao', 'Yajuan Lyu', 'Qiaoqiao She', 'Yongdong Zhang']",http://arxiv.org/pdf/2303.13824v1.pdf,2023-03-24,," In-Context Learning (ICL), which formulates target tasks as prompt completion conditioned on in-context demonstrations, has become the prevailing utilization of LLMs. In this paper, we first disclose an actual predicament for this typical usage that it can not scale up with training data due to context length restriction. Besides, existing works have shown that ICL also suffers from various biases and requires delicate calibration treatment. To address both challenges, we advocate a simple and effective solution, $k$NN Prompting, which first queries LLM with training data for distributed representations, then predicts test instances by simply referring to nearest neighbors. We conduct comprehensive experiments to demonstrate its two-fold superiority: 1) Calibration-Free: $k$NN Prompting does not directly align LLM output distribution with task-specific label space, instead leverages such distribution to align test and training instances. 
It significantly outperforms state-of-the-art calibration-based methods under comparable few-shot scenario. 2) Beyond-Context: $k$NN Prompting can further scale up effectively with as many training data as are available, continually bringing substantial improvements. The scaling trend holds across 10 orders of magnitude ranging from 2 shots to 1024 shots as well as different LLMs scales ranging from 0.8B to 30B. It successfully bridges data scaling into model scaling, and brings new potentials for the gradient-free paradigm of LLM deployment. Code is publicly available.",,arXiv,"['cs.cl', 'cs.ai']",, what makes good incontext demonstrations for code intelligence tasks with llms,"['Shuzheng Gao', 'Xin-Cheng Wen', 'Cuiyun Gao', 'Wenxuan Wang', 'Hongyu Zhang', 'Michael R. Lyu']",http://arxiv.org/pdf/2304.07575v2.pdf,2023-04-15,," Pre-trained models of source code have gained widespread popularity in many code intelligence tasks. Recently, with the scaling of the model and corpus size, large language models have shown the ability of in-context learning (ICL). ICL employs task instructions and a few examples as demonstrations, and then inputs the demonstrations to the language models for making predictions. This new learning paradigm is training-free and has shown impressive performance in various natural language processing and code intelligence tasks. However, the performance of ICL heavily relies on the quality of demonstrations, e.g., the selected examples. It is important to systematically investigate how to construct a good demonstration for code-related tasks. In this paper, we empirically explore the impact of three key factors on the performance of ICL in code intelligence tasks: the selection, order, and number of demonstration examples. We conduct extensive experiments on three code intelligence tasks including code summarization, bug fixing, and program synthesis. Our experimental results demonstrate that all the above three factors dramatically impact the performance of ICL in code intelligence tasks. Additionally, we summarize our findings and provide takeaway suggestions on how to construct effective demonstrations, taking into account these three perspectives. We also show that a carefully-designed demonstration based on our findings can lead to substantial improvements over widely-used demonstration construction methods, e.g., improving BLEU-4, EM, and EM by at least 9.90%, 175.96%, and 50.81% on code summarization, bug fixing, and program synthesis, respectively",,arXiv,['cs.se'],, controlled text generation with natural language instructions,"['Wangchunshu Zhou', 'Yuchen Eleanor Jiang', 'Ethan Wilcox', 'Ryan Cotterell', 'Mrinmaya Sachan']",http://arxiv.org/pdf/2304.14293v2.pdf,2023-04-27,," Large language models generate fluent texts and can follow natural language instructions to solve a wide range of tasks without task-specific training. Nevertheless, it is notoriously difficult to control their generation to satisfy the various constraints required by different applications. In this work, we present InstructCTG, a controlled text generation framework that incorporates different constraints by conditioning on natural language descriptions and demonstrations of the constraints. In particular, we first extract the underlying constraints of natural texts through a combination of off-the-shelf NLP tools and simple heuristics. We then verbalize the constraints into natural language instructions to form weakly supervised training data. 
By prepending natural language descriptions of the constraints and a few demonstrations, we fine-tune a pre-trained language model to incorporate various types of constraints. Compared to existing search-based or score-based methods, InstructCTG is more flexible to different constraint types and has a much smaller impact on the generation quality and speed because it does not modify the decoding procedure. Additionally, InstructCTG allows the model to adapt to new constraints without re-training through the use of few-shot task generalization and in-context learning abilities of instruction-tuned language models.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, using chatgpt for entity matching,"['Ralph Peeters', 'Christian Bizer']",http://arxiv.org/pdf/2305.03423v2.pdf,2023-05-05,," Entity Matching is the task of deciding if two entity descriptions refer to the same real-world entity. State-of-the-art entity matching methods often rely on fine-tuning Transformer models such as BERT or RoBERTa. Two major drawbacks of using these models for entity matching are that (i) the models require significant amounts of fine-tuning data for reaching a good performance and (ii) the fine-tuned models are not robust concerning out-of-distribution entities. In this paper, we investigate using ChatGPT for entity matching as a more robust, training data-efficient alternative to traditional Transformer models. We perform experiments along three dimensions: (i) general prompt design, (ii) in-context learning, and (iii) provision of higher-level matching knowledge. We show that ChatGPT is competitive with a fine-tuned RoBERTa model, reaching a zero-shot performance of 82.35% F1 on a challenging matching task on which RoBERTa requires 2000 training examples for reaching a similar performance. Adding in-context demonstrations to the prompts further improves the F1 by up to 7.85% when using similarity-based example selection. Always using the same set of 10 handpicked demonstrations leads to an improvement of 4.92% over the zero-shot performance. Finally, we show that ChatGPT can also be guided by adding higher-level matching knowledge in the form of rules to the prompts. Providing matching rules leads to similar performance gains as providing in-context demonstrations.",,arXiv,['cs.cl'],, joint foundation model caching and inference of generative ai services for edge intelligence,"['Minrui Xu', 'Dusit Niyato', 'Hongliang Zhang', 'Jiawen Kang', 'Zehui Xiong', 'Shiwen Mao', 'Zhu Han']",http://arxiv.org/pdf/2305.12130v1.pdf,2023-05-20,," With the rapid development of artificial general intelligence (AGI), various multimedia services based on pretrained foundation models (PFMs) need to be effectively deployed. With edge servers that have cloud-level computing power, edge intelligence can extend the capabilities of AGI to mobile edge networks. However, compared with cloud data centers, resource-limited edge servers can only cache and execute a small number of PFMs, which typically consist of billions of parameters and require intensive computing power and GPU memory during inference. To address this challenge, in this paper, we propose a joint foundation model caching and inference framework that aims to balance the tradeoff among inference latency, accuracy, and resource consumption by managing cached PFMs and user requests efficiently during the provisioning of generative AI services. 
Specifically, considering the in-context learning ability of PFMs, a new metric named the Age of Context (AoC), is proposed to model the freshness and relevance between examples in past demonstrations and current service requests. Based on the AoC, we propose a least context caching algorithm to manage cached PFMs at edge servers with historical prompts and inference results. The numerical results demonstrate that the proposed algorithm can reduce system costs compared with existing baselines by effectively utilizing contextual information.",,arXiv,['cs.ni'],, enhancing fewshot texttosql capabilities of large language models a study on prompt design strategies,"['Linyong Nan', 'Yilun Zhao', 'Weijin Zou', 'Narutatsu Ri', 'Jaesung Tae', 'Ellen Zhang', 'Arman Cohan', 'Dragomir Radev']",http://arxiv.org/pdf/2305.12586v1.pdf,2023-05-21,," In-context learning (ICL) has emerged as a new approach to various natural language processing tasks, utilizing large language models (LLMs) to make predictions based on context that has been supplemented with a few examples or task-specific instructions. In this paper, we aim to extend this method to question answering tasks that utilize structured knowledge sources, and improve Text-to-SQL systems by exploring various prompt design strategies for employing LLMs. We conduct a systematic investigation into different demonstration selection methods and optimal instruction formats for prompting LLMs in the Text-to-SQL task. Our approach involves leveraging the syntactic structure of an example's SQL query to retrieve demonstrations, and we demonstrate that pursuing both diversity and similarity in demonstration selection leads to enhanced performance. Furthermore, we show that LLMs benefit from database-related knowledge augmentations. Our most effective strategy outperforms the state-of-the-art system by 2.5 points (Execution Accuracy) and the best fine-tuned system by 5.1 points on the Spider dataset. These results highlight the effectiveness of our approach in adapting LLMs to the Text-to-SQL task, and we present an analysis of the factors contributing to the success of our strategy.",,arXiv,['cs.cl'],, exploring chainofthought style prompting for texttosql,"['Chang-You Tai', 'Ziru Chen', 'Tianshu Zhang', 'Xiang Deng', 'Huan Sun']",http://arxiv.org/pdf/2305.14215v2.pdf,2023-05-23,," In-context learning with large language models (LLMs) has recently caught increasing attention due to its superior few-shot performance on various tasks. However, its performance on text-to-SQL parsing still has much room for improvement. In this paper, we hypothesize that a crucial aspect of LLMs to improve for text-to-SQL parsing is their multi-step reasoning ability. Thus, we systematically study how to enhance LLMs' reasoning ability through chain of thought (CoT) style prompting, including the original chain-of-thought prompting (Wei et al., 2022b) and least-to-most prompting (Zhou et al., 2023). Our experiments demonstrate that iterative prompting as in Zhou et al. (2023) may be unnecessary for text-to-SQL parsing, and using detailed reasoning steps tends to have more error propagation issues. Based on these findings, we propose a new CoT-style prompting method for text-to-SQL parsing. 
It brings 5.2 and 6.5 point absolute gains on the Spider development set and the Spider Realistic set, respectively, compared to the standard prompting method without reasoning steps; 2.4 and 1.5 point absolute gains, compared to the least-to-most prompting method.",,arXiv,['cs.cl'],, increasing probability mass on answer choices does not always improve accuracy,"['Sarah Wiegreffe', 'Matthew Finlayson', 'Oyvind Tafjord', 'Peter Clark', 'Ashish Sabharwal']",http://arxiv.org/pdf/2305.14596v2.pdf,2023-05-24,," When pretrained language models (LMs) are applied to discriminative tasks such as multiple-choice questions, they place probability mass on vocabulary tokens that aren't among the given answer choices. Spreading probability mass across multiple surface forms with identical meaning (such as ""bath"" and ""bathtub"") is thought to cause an underestimation of a model's true performance, referred to as the ""surface form competition"" (SFC) hypothesis. This has motivated the introduction of various probability normalization methods. However, many core questions remain unanswered. How do we measure SFC? Are there direct ways of reducing it, and does doing so improve task performance? We propose a mathematical formalism for SFC which allows us to quantify and bound its impact for the first time. We identify a simple method for reducing it -- namely, increasing probability mass on the given answer choices by a) including them in the prompt and b) using in-context learning with even just one example. We show this method eliminates the impact of SFC in the majority of instances. Our experiments on three diverse datasets and six LMs reveal several additional surprising findings. For example, both normalization and prompting methods for reducing SFC can be ineffective or even detrimental to task performance for some LMs. We conclude with practical insights for effectively prompting LMs for multiple-choice tasks.",,arXiv,"['cs.cl', 'cs.lg']",, universal selfadaptive prompting,"['Xingchen Wan', 'Ruoxi Sun', 'Hootan Nakhost', 'Hanjun Dai', 'Julian Martin Eisenschlos', 'Sercan O. Arik', 'Tomas Pfister']",http://arxiv.org/pdf/2305.14926v2.pdf,2023-05-24,," A hallmark of modern large language models (LLMs) is their impressive general zero-shot and few-shot abilities, often elicited through in-context learning (ICL) via prompting. However, while highly coveted and being the most general, zero-shot performances in LLMs are still typically weaker due to the lack of guidance and the difficulty of applying existing automatic prompt design methods in general tasks when ground-truth labels are unavailable. In this study, we address this by presenting Universal Self-Adaptive Prompting (USP), an automatic prompt design approach specifically tailored for zero-shot learning (while compatible with few-shot). Requiring only a small amount of unlabeled data and an inference-only LLM, USP is highly versatile: to achieve universal prompting, USP categorizes a possible NLP task into one of the three possible task types and then uses a corresponding selector to select the most suitable queries and zero-shot model-generated responses as pseudo-demonstrations, thereby generalizing ICL to the zero-shot setup in a fully automated way. 
We evaluate USP with PaLM and PaLM 2 models and demonstrate performances that are considerably stronger than standard zero-shot baselines and often comparable to or even superior to few-shot baselines across more than 40 natural language understanding, natural language generation, and reasoning tasks.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, are chatbots ready for privacysensitive applications an investigation into input regurgitation and promptinduced sanitization,"['Aman Priyanshu', 'Supriti Vijay', 'Ayush Kumar', 'Rakshit Naidu', 'Fatemehsadat Mireshghallah']",http://arxiv.org/pdf/2305.15008v1.pdf,2023-05-24,," LLM-powered chatbots are becoming widely adopted in applications such as healthcare, personal assistants, industry hiring decisions, etc. In many of these cases, chatbots are fed sensitive, personal information in their prompts, as samples for in-context learning, retrieved records from a database, or as part of the conversation. The information provided in the prompt could directly appear in the output, which might have privacy ramifications if there is sensitive information there. As such, in this paper, we aim to understand the input copying and regurgitation capabilities of these models during inference and how they can be directly instructed to limit this copying by complying with regulations such as HIPAA and GDPR, based on their internal knowledge of them. More specifically, we find that when ChatGPT is prompted to summarize cover letters of a 100 candidates, it would retain personally identifiable information (PII) verbatim in 57.4% of cases, and we find this retention to be non-uniform between different subgroups of people, based on attributes such as gender identity. We then probe ChatGPT's perception of privacy-related policies and privatization mechanisms by directly instructing it to provide compliant outputs and observe a significant omission of PII from output.",,arXiv,"['cs.cl', 'cs.ai', 'cs.cy']",, finetuning language models with just forward passes,"['Sadhika Malladi', 'Tianyu Gao', 'Eshaan Nichani', 'Alex Damian', 'Jason D. Lee', 'Danqi Chen', 'Sanjeev Arora']",http://arxiv.org/pdf/2305.17333v3.pdf,2023-05-27,," Fine-tuning language models (LMs) has yielded success on diverse downstream tasks, but as LMs grow in size, backpropagation requires a prohibitively large amount of memory. Zeroth-order (ZO) methods can in principle estimate gradients using only two forward passes but are theorized to be catastrophically slow for optimizing large models. In this work, we propose a memory-efficient zeroth-order optimizer (MeZO), adapting the classical ZO-SGD method to operate in-place, thereby fine-tuning LMs with the same memory footprint as inference. For example, with a single A100 80GB GPU, MeZO can train a 30-billion parameter model, whereas fine-tuning with backpropagation can train only a 2.7B LM with the same budget. We conduct comprehensive experiments across model types (masked and autoregressive LMs), model scales (up to 66B), and downstream tasks (classification, multiple-choice, and generation). Our results demonstrate that (1) MeZO significantly outperforms in-context learning and linear probing; (2) MeZO achieves comparable performance to fine-tuning with backpropagation across multiple tasks, with up to 12x memory reduction and up to 2x GPU-hour reduction in our implementation; (3) MeZO is compatible with both full-parameter and parameter-efficient tuning techniques such as LoRA and prefix tuning; (4) MeZO can effectively optimize non-differentiable objectives (e.g., maximizing accuracy or F1). 
We support our empirical findings with theoretical insights, highlighting how adequate pre-training and task prompts enable MeZO to fine-tune huge models, despite classical ZO analyses suggesting otherwise.",,arXiv,"['cs.lg', 'cs.cl']",, improving clip training with language rewrites,"['Lijie Fan', 'Dilip Krishnan', 'Phillip Isola', 'Dina Katabi', 'Yonglong Tian']",http://arxiv.org/pdf/2305.20088v2.pdf,2023-05-31,," Contrastive Language-Image Pre-training (CLIP) stands as one of the most effective and scalable methods for training transferable vision models using paired image and text data. CLIP models are trained using contrastive loss, which typically relies on data augmentations to prevent overfitting and shortcuts. However, in the CLIP training paradigm, data augmentations are exclusively applied to image inputs, while language inputs remain unchanged throughout the entire training process, limiting the exposure of diverse texts to the same image. In this paper, we introduce Language augmented CLIP (LaCLIP), a simple yet highly effective approach to enhance CLIP training through language rewrites. Leveraging the in-context learning capability of large language models, we rewrite the text descriptions associated with each image. These rewritten texts exhibit diversity in sentence structure and vocabulary while preserving the original key concepts and meanings. During training, LaCLIP randomly selects either the original texts or the rewritten versions as text augmentations for each image. Extensive experiments on CC3M, CC12M, RedCaps and LAION-400M datasets show that CLIP pre-training with language rewrites significantly improves the transfer performance without computation or memory overhead during training. Specifically for ImageNet zero-shot accuracy, LaCLIP outperforms CLIP by 8.2% on CC12M and 2.4% on LAION-400M. Code is available at https://github.com/LijieFan/LaCLIP.",,arXiv,"['cs.cv', 'cs.cl', 'cs.lg']",, sqlpalm improved large language model adaptation for texttosql,"['Ruoxi Sun', 'Sercan O. Arik', 'Hootan Nakhost', 'Hanjun Dai', 'Rajarishi Sinha', 'Pengcheng Yin', 'Tomas Pfister']",http://arxiv.org/pdf/2306.00739v3.pdf,2023-05-26,," One impressive emergent capability of large language models (LLMs) is generation of code, including Structured Query Language (SQL) for databases. For the task of converting natural language text to SQL queries, Text-to-SQL, adaptation of LLMs is of paramount importance, both in in-context learning and fine-tuning settings, depending on the amount of adaptation data used. In this paper, we propose an LLM-based Text-to-SQL model SQL-PaLM, leveraging on PaLM-2, that pushes the state-of-the-art in both settings. Few-shot SQL-PaLM is based on an execution-based self-consistency prompting approach designed for Text-to-SQL, and achieves 77.3% in test-suite accuracy on Spider, which to our best knowledge is the first to outperform previous state-of-the-art with fine-tuning by a significant margin, 4%. Furthermore, we demonstrate that the fine-tuned SQL-PALM outperforms it further by another 1%. Towards applying SQL-PaLM to real-world scenarios we further evaluate its robustness on other challenging variants of Spider and demonstrate the superior generalization capability of SQL-PaLM. 
In addition, via extensive case studies, we demonstrate the impressive intelligent capabilities and various success enablers of LLM-based Text-to-SQL.",,arXiv,"['cs.cl', 'cs.ai', 'cs.db']",, zeroshot 3d shape correspondence,"['Ahmed Abdelreheem', 'Abdelrahman Eldesokey', 'Maks Ovsjanikov', 'Peter Wonka']",http://arxiv.org/pdf/2306.03253v2.pdf,2023-06-05,," We propose a novel zero-shot approach to computing correspondences between 3D shapes. Existing approaches mainly focus on isometric and near-isometric shape pairs (e.g., human vs. human), but less attention has been given to strongly non-isometric and inter-class shape matching (e.g., human vs. cow). To this end, we introduce a fully automatic method that exploits the exceptional reasoning capabilities of recent foundation models in language and vision to tackle difficult shape correspondence problems. Our approach comprises multiple stages. First, we classify the 3D shapes in a zero-shot manner by feeding rendered shape views to a language-vision model (e.g., BLIP2) to generate a list of class proposals per shape. These proposals are unified into a single class per shape by employing the reasoning capabilities of ChatGPT. Second, we attempt to segment the two shapes in a zero-shot manner, but in contrast to the co-segmentation problem, we do not require a mutual set of semantic regions. Instead, we propose to exploit the in-context learning capabilities of ChatGPT to generate two different sets of semantic regions for each shape and a semantic mapping between them. This enables our approach to match strongly non-isometric shapes with significant differences in geometric structure. Finally, we employ the generated semantic mapping to produce coarse correspondences that can further be refined by the functional maps framework to produce dense point-to-point maps. Our approach, despite its simplicity, produces highly plausible results in a zero-shot manner, especially between strongly non-isometric shapes. Project webpage: https://samir55.github.io/3dshapematch/.",,arXiv,['cs.cv'],, mimicit multimodal incontext instruction tuning,"['Bo Li', 'Yuanhan Zhang', 'Liangyu Chen', 'Jinghao Wang', 'Fanyi Pu', 'Jingkang Yang', 'Chunyuan Li', 'Ziwei Liu']",http://arxiv.org/pdf/2306.05425v1.pdf,2023-06-08,," High-quality instructions and responses are essential for the zero-shot performance of large language models on interactive natural language tasks. For interactive vision-language tasks involving intricate visual scenes, a large quantity of diverse and creative instruction-response pairs should be imperative to tune vision-language models (VLMs). Nevertheless, the current availability of vision-language instruction-response pairs in terms of quantity, diversity, and creativity remains limited, posing challenges to the generalization of interactive VLMs. Here we present MultI-Modal In-Context Instruction Tuning (MIMIC-IT), a dataset comprising 2.8 million multimodal instruction-response pairs, with 2.2 million unique instructions derived from images and videos. Each pair is accompanied by multi-modal in-context information, forming conversational contexts aimed at empowering VLMs in perception, reasoning, and planning. The instruction-response collection process, dubbed as Syphus, is scaled using an automatic annotation pipeline that combines human expertise with GPT's capabilities. Using the MIMIC-IT dataset, we train a large VLM named Otter. 
Based on extensive evaluations conducted on vision-language benchmarks, it has been observed that Otter demonstrates remarkable proficiency in multi-modal perception, reasoning, and in-context learning. Human evaluation reveals it effectively aligns with the user's intentions. We release the MIMIC-IT dataset, instruction-response collection pipeline, benchmarks, and the Otter model.",,arXiv,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.hc']",, medfmc a realworld dataset and benchmark for foundation model adaptation in medical image classification,"['Dequan Wang', 'Xiaosong Wang', 'Lilong Wang', 'Mengzhang Li', 'Qian Da', 'Xiaoqiang Liu', 'Xiangyu Gao', 'Jun Shen', 'Junjun He', 'Tian Shen', 'Qi Duan', 'Jie Zhao', 'Kang Li', 'Yu Qiao', 'Shaoting Zhang']",http://arxiv.org/pdf/2306.09579v1.pdf,2023-06-16,," Foundation models, often pre-trained with large-scale data, have achieved paramount success in jump-starting various vision and language applications. Recent advances further enable adapting foundation models in downstream tasks efficiently using only a few training samples, e.g., in-context learning. Yet, the application of such learning paradigms in medical image analysis remains scarce due to the shortage of publicly accessible data and benchmarks. In this paper, we aim at approaches adapting the foundation models for medical image classification and present a novel dataset and benchmark for the evaluation, i.e., examining the overall performance of accommodating the large-scale foundation models downstream on a set of diverse real-world clinical tasks. We collect five sets of medical imaging data from multiple institutes targeting a variety of real-world clinical tasks (22,349 images in total), i.e., thoracic diseases screening in X-rays, pathological lesion tissue screening, lesion detection in endoscopy images, neonatal jaundice evaluation, and diabetic retinopathy grading. Results of multiple baseline methods are demonstrated using the proposed dataset from both accuracy and cost-effective perspectives.",,arXiv,['cs.cv'],, jiuzhang 20 a unified chinese pretrained language model for multitask mathematical problem solving,"['Wayne Xin Zhao', 'Kun Zhou', 'Beichen Zhang', 'Zheng Gong', 'Zhipeng Chen', 'Yuanhang Zhou', 'Ji-Rong Wen', 'Jing Sha', 'Shijin Wang', 'Cong Liu', 'Guoping Hu']",http://arxiv.org/pdf/2306.11027v1.pdf,2023-06-19,," Although pre-trained language models~(PLMs) have recently advanced the research progress in mathematical reasoning, they are not specially designed as a capable multi-task solver, suffering from high cost for multi-task deployment (\eg a model copy for a task) and inferior performance on complex mathematical problems in practical applications. To address these issues, in this paper, we propose \textbf{JiuZhang~2.0}, a unified Chinese PLM specially for multi-task mathematical problem solving. Our idea is to maintain a moderate-sized model and employ the \emph{cross-task knowledge sharing} to improve the model capacity in a multi-task setting. Specially, we construct a Mixture-of-Experts~(MoE) architecture for modeling mathematical text, so as to capture the common mathematical knowledge across tasks. For optimizing the MoE architecture, we design \emph{multi-task continual pre-training} and \emph{multi-task fine-tuning} strategies for multi-task adaptation. These training strategies can effectively decompose the knowledge from the task data and establish the cross-task sharing via expert networks. 
In order to further improve the general capacity of solving different complex tasks, we leverage large language models~(LLMs) as complementary models to iteratively refine the generated solution by our PLM, via in-context learning. Extensive experiments have demonstrated the effectiveness of our model.",,arXiv,"['cs.cl', 'cs.ai']",, a chain of aibased solutions for resolving fqns and fixing syntax errors in partial code,"['Qing Huang', 'Jiahui Zhu', 'Zhenchang Xing', 'Huan Jin', 'Changjing Wang', 'Xiwei Xu']",http://arxiv.org/pdf/2306.11981v1.pdf,2023-06-21,," API documentation, technical blogs and programming Q&A sites contain numerous partial code that can be reused in programming tasks, but often these code are uncompilable due to unresolved names and syntax errors. To facilitate partial code reuse, we propose the Partial Code Reuse Chain (PCR-Chain) for resolving fully-qualified names (FQNs) and fixing last-mile syntax errors in partial code based on a giant large language model (LLM) like ChatGPT. Methodologically, PCR-Chain is backed up by the underlying global-level prompt architecture (which combines three design ideas: hierarchical task breakdown, prompt composition, and a mix of prompt-based AI and non-AI units) and the local-level prompt design. Technically, we propose PCR-Chain, which employs in-context learning rather than symbolic, costly training methods. Experimental results demonstrate that in dynamically-typed languages (Python), PCR-Chain outperforms current state-of-the-art (SOTA) 5% accuracy like RING. For statically-typed languages (Java), our approach achieves high accuracy of 80.5% in resolving both non-FQNs and last-mile syntax errors, surpassing SOTA methods (RING) that can only address last-mile syntax errors. The correct execution of the unit, module, and PCR-Chain demonstrates the effectiveness of the prompt design, composition, and architecture and opens up possibilities for building software engineering tools based on LLMs, replacing traditional program analysis methods.",,arXiv,['cs.se'],, kosmos2 grounding multimodal large language models to the world,"['Zhiliang Peng', 'Wenhui Wang', 'Li Dong', 'Yaru Hao', 'Shaohan Huang', 'Shuming Ma', 'Furu Wei']",http://arxiv.org/pdf/2306.14824v3.pdf,2023-06-26,," We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. 
Code and pretrained models are available at https://aka.ms/kosmos-2.",,arXiv,"['cs.cl', 'cs.cv']",, a gpt4 reticular chemist for guiding mof discovery,"['Zhiling Zheng', 'Zichao Rong', 'Nakul Rampal', 'Christian Borgs', 'Jennifer T. Chayes', 'Omar M. Yaghi']",http://arxiv.org/pdf/2306.14915v2.pdf,2023-06-20,," We present a new framework integrating the AI model GPT-4 into the iterative process of reticular chemistry experimentation, leveraging a cooperative workflow of interaction between AI and a human researcher. This GPT-4 Reticular Chemist is an integrated system composed of three phases. Each of these utilizes GPT-4 in various capacities, wherein GPT-4 provides detailed instructions for chemical experimentation and the human provides feedback on the experimental outcomes, including both success and failures, for the in-context learning of AI in the next iteration. This iterative human-AI interaction enabled GPT-4 to learn from the outcomes, much like an experienced chemist, by a prompt-learning strategy. Importantly, the system is based on natural language for both development and operation, eliminating the need for coding skills, and thus, make it accessible to all chemists. Our collaboration with GPT-4 Reticular Chemist guided the discovery of an isoreticular series of MOFs, with each synthesis fine-tuned through iterative feedback and expert suggestions. This workflow presents a potential for broader applications in scientific research by harnessing the capability of large language models like GPT-4 to enhance the feasibility and efficiency of research activities.",,arXiv,"['cs.ai', 'cond-mat.mtrl-sci', 'physics.chem-ph']",, voicebox textguided multilingual universal speech generation at scale,"['Matthew Le', 'Apoorv Vyas', 'Bowen Shi', 'Brian Karrer', 'Leda Sari', 'Rashel Moritz', 'Mary Williamson', 'Vimal Manohar', 'Yossi Adi', 'Jay Mahadeokar', 'Wei-Ning Hsu']",http://arxiv.org/pdf/2306.15687v2.pdf,2023-06-23,," Large-scale generative models such as GPT and DALL-E have revolutionized the research community. These models not only generate high fidelity outputs, but are also generalists which can solve tasks not explicitly taught. In contrast, speech generative models are still primitive in terms of scale and task generalization. In this paper, we present Voicebox, the most versatile text-guided generative model for speech at scale. Voicebox is a non-autoregressive flow-matching model trained to infill speech, given audio context and text, trained on over 50K hours of speech that are not filtered or enhanced. Similar to GPT, Voicebox can perform many different tasks through in-context learning, but is more flexible as it can also condition on future context. Voicebox can be used for mono or cross-lingual zero-shot text-to-speech synthesis, noise removal, content editing, style conversion, and diverse sample generation. In particular, Voicebox outperforms the state-of-the-art zero-shot TTS model VALL-E on both intelligibility (5.9% vs 1.9% word error rates) and audio similarity (0.580 vs 0.681) while being up to 20 times faster. Audio samples can be found in \url{https://voicebox.metademolab.com}.",,arXiv,"['eess.as', 'cs.cl', 'cs.lg', 'cs.sd']",, spae semantic pyramid autoencoder for multimodal generation with frozen llms,"['Lijun Yu', 'Yong Cheng', 'Zhiruo Wang', 'Vivek Kumar', 'Wolfgang Macherey', 'Yanping Huang', 'David A. Ross', 'Irfan Essa', 'Yonatan Bisk', 'Ming-Hsuan Yang', 'Kevin Murphy', 'Alexander G. Hauptmann', 'Lu Jiang']",http://arxiv.org/pdf/2306.17842v3.pdf,2023-06-30,," In this work, we introduce Semantic Pyramid AutoEncoder (SPAE) for enabling frozen LLMs to perform both understanding and generation tasks involving non-linguistic modalities such as images or videos. SPAE converts between raw pixels and interpretable lexical tokens (or words) extracted from the LLM's vocabulary. The resulting tokens capture both the semantic meaning and the fine-grained details needed for visual reconstruction, effectively translating the visual content into a language comprehensible to the LLM, and empowering it to perform a wide array of multimodal tasks. Our approach is validated through in-context learning experiments with frozen PaLM 2 and GPT 3.5 on a diverse set of image understanding and generation tasks. Our method marks the first successful attempt to enable a frozen LLM to generate image content while surpassing state-of-the-art performance in image understanding tasks, under the same setting, by over 25%.",,arXiv,"['cs.cv', 'cs.cl', 'cs.mm']",, recallm an adaptable memory mechanism with temporal understanding for large language models,"['Brandon Kynoch', 'Hugo Latapie', 'Dwane van der Sluis']",http://arxiv.org/pdf/2307.02738v3.pdf,2023-07-06,," Large Language Models (LLMs) have made extraordinary progress in the field of Artificial Intelligence and have demonstrated remarkable capabilities across a large variety of tasks and domains. However, as we venture closer to creating Artificial General Intelligence (AGI) systems, we recognize the need to supplement LLMs with long-term memory to overcome the context window limitation and more importantly, to create a foundation for sustained reasoning, cumulative learning and long-term user interaction. In this paper we propose RecallM, a novel architecture for providing LLMs with an adaptable and updatable long-term memory mechanism. Unlike previous methods, the RecallM architecture is particularly effective at belief updating and maintaining a temporal understanding of the knowledge provided to it. We demonstrate through various experiments the effectiveness of this architecture. Furthermore, through our own temporal understanding and belief updating experiments, we show that RecallM is four times more effective than using a vector database for updating knowledge previously stored in long-term memory. We also demonstrate that RecallM shows competitive performance on general question-answering and in-context learning tasks.",,arXiv,"['cs.ai', 'cs.cl', 'cs.sc']",, large language models as general pattern machines,"['Suvir Mirchandani', 'Fei Xia', 'Pete Florence', 'Brian Ichter', 'Danny Driess', 'Montserrat Gonzalez Arenas', 'Kanishka Rao', 'Dorsa Sadigh', 'Andy Zeng']",http://arxiv.org/pdf/2307.04721v2.pdf,2023-07-10,," We observe that pre-trained large language models (LLMs) are capable of autoregressively completing complex token sequences -- from arbitrary ones procedurally generated by probabilistic context-free grammars (PCFG), to more rich spatial patterns found in the Abstraction and Reasoning Corpus (ARC), a general AI benchmark, prompted in the style of ASCII art. Surprisingly, pattern completion proficiency can be partially retained even when the sequences are expressed using tokens randomly sampled from the vocabulary. These results suggest that without any additional training, LLMs can serve as general sequence modelers, driven by in-context learning. 
In this work, we investigate how these zero-shot capabilities may be applied to problems in robotics -- from extrapolating sequences of numbers that represent states over time to complete simple motions, to least-to-most prompting of reward-conditioned trajectories that can discover and represent closed-loop policies (e.g., a stabilizing controller for CartPole). While difficult to deploy today for real systems due to latency, context size limitations, and compute costs, the approach of using LLMs to drive low-level control may provide an exciting glimpse into how the patterns among words could be transferred to actions.",,arXiv,"['cs.ai', 'cs.cl', 'cs.ro']",, megatts 2 zeroshot texttospeech with arbitrary length speech prompts,"['Ziyue Jiang', 'Jinglin Liu', 'Yi Ren', 'Jinzheng He', 'Chen Zhang', 'Zhenhui Ye', 'Pengfei Wei', 'Chunfeng Wang', 'Xiang Yin', 'Zejun Ma', 'Zhou Zhao']",http://arxiv.org/pdf/2307.07218v2.pdf,2023-07-14,," Zero-shot text-to-speech aims at synthesizing voices with unseen speech prompts. Previous large-scale multispeaker TTS models have successfully achieved this goal with an enrolled recording within 10 seconds. However, most of them are designed to utilize only short speech prompts. The limited information in short speech prompts significantly hinders the performance of fine-grained identity imitation. In this paper, we introduce Mega-TTS 2, a generic zero-shot multispeaker TTS model that is capable of synthesizing speech for unseen speakers with arbitrary-length prompts. Specifically, we 1) design a multi-reference timbre encoder to extract timbre information from multiple reference speeches; 2) and train a prosody language model with arbitrary-length speech prompts; With these designs, our model is suitable for prompts of different lengths, which extends the upper bound of speech quality for zero-shot text-to-speech. Besides arbitrary-length prompts, we introduce arbitrary-source prompts, which leverages the probabilities derived from multiple P-LLM outputs to produce expressive and controlled prosody. Furthermore, we propose a phoneme-level auto-regressive duration model to introduce in-context learning capabilities to duration modeling. Experiments demonstrate that our method could not only synthesize identity-preserving speech with a short prompt of an unseen speaker but also achieve improved performance with longer speech prompts. Audio samples can be found in https://mega-tts.github.io/mega2_demo/.",,arXiv,"['eess.as', 'cs.sd']",, do emergent abilities exist in quantized large language models an empirical study,"['Peiyu Liu', 'Zikang Liu', 'Ze-Feng Gao', 'Dawei Gao', 'Wayne Xin Zhao', 'Yaliang Li', 'Bolin Ding', 'Ji-Rong Wen']",http://arxiv.org/pdf/2307.08072v2.pdf,2023-07-16,," Despite the superior performance, Large Language Models~(LLMs) require significant computational resources for deployment and use. To overcome this issue, quantization methods have been widely applied to reduce the memory footprint of LLMs as well as increasing the inference rate. However, a major challenge is that low-bit quantization methods often lead to performance degradation. It is important to understand how quantization impacts the capacity of LLMs. Different from previous studies focused on overall performance, this work aims to investigate the impact of quantization on \emph{emergent abilities}, which are important characteristics that distinguish LLMs from small language models. Specially, we examine the abilities of in-context learning, chain-of-thought reasoning, and instruction-following in quantized LLMs. 
Our empirical experiments show that these emergent abilities still exist in 4-bit quantization models, while 2-bit models encounter severe performance degradation on the test of these abilities. To improve the performance of low-bit models, we conduct two special experiments: (1) fine-grained impact analysis that studies which components (or substructures) are more sensitive to quantization, and (2) performance compensation through model fine-tuning. Our work derives a series of important findings to understand the impact of quantization on emergent abilities, and sheds light on the possibilities of extremely low-bit quantization for LLMs.",,arXiv,"['cs.cl', 'cs.ai']",, generating mathematical derivations with large language models,"['Jordan Meadows', 'Marco Valentino', 'Andre Freitas']",http://arxiv.org/pdf/2307.09998v3.pdf,2023-07-19,," The derivation of mathematical results in specialised fields, using Large Language Models (LLMs), is an emerging research direction that can help identify models' limitations, and potentially support mathematical discovery. In this paper, we leverage a symbolic engine to generate derivations of equations at scale, and investigate the capabilities of LLMs when deriving goal equations from premises. Specifically, we employ in-context learning for GPT and fine-tune a range of T5 models to compare the robustness and generalisation of pre-training strategies to specialised models. Empirical results show that fine-tuned FLAN-T5-large (MathT5) outperforms GPT models on all static and out-of-distribution test sets in conventional scores. However, an in-depth analysis reveals that the fine-tuned models are more sensitive to perturbations involving unseen symbols and (to a lesser extent) changes to equation structure. In addition, we analyse 1.7K equations, and over 200 derivations, to highlight common reasoning errors such as the inclusion of incorrect, irrelevant, and redundant equations. Finally, we explore the suitability of existing metrics for evaluating mathematical derivations and find evidence that, while they can capture general properties such as sensitivity to perturbations, they fail to highlight fine-grained reasoning errors and essential differences between models. Overall, this work demonstrates that training models on synthetic data may improve their math capabilities beyond much larger LLMs, but current metrics are not appropriately assessing the quality of generated mathematical text.",,arXiv,"['cs.cl', 'math.ho']",, layoutllmt2i eliciting layout guidance from llm for texttoimage generation,"['Leigang Qu', 'Shengqiong Wu', 'Hao Fei', 'Liqiang Nie', 'Tat-Seng Chua']",http://arxiv.org/pdf/2308.05095v2.pdf,2023-08-09,," In the text-to-image generation field, recent remarkable progress in Stable Diffusion makes it possible to generate rich kinds of novel photorealistic images. However, current models still face misalignment issues (e.g., problematic spatial relation understanding and numeration failure) in complex natural scenes, which impedes the high-faithfulness text-to-image generation. Although recent efforts have been made to improve controllability by giving fine-grained guidance (e.g., sketch and scribbles), this issue has not been fundamentally tackled since users have to provide such guidance information manually. In this work, we strive to synthesize high-fidelity images that are semantically aligned with a given textual prompt without any guidance. Toward this end, we propose a coarse-to-fine paradigm to achieve layout planning and image generation.
Concretely, we first generate the coarse-grained layout conditioned on a given textual prompt via in-context learning based on Large Language Models. Afterward, we propose a fine-grained object-interaction diffusion method to synthesize high-faithfulness images conditioned on the prompt and the automatically generated layout. Extensive experiments demonstrate that our proposed method outperforms the state-of-the-art models in terms of layout and image generation. Our code and settings are available at https://layoutllm-t2i.github.io.",,arXiv,"['cs.cv', 'cs.ai']",, audioldm 2 learning holistic audio generation with selfsupervised pretraining,"['Haohe Liu', 'Qiao Tian', 'Yi Yuan', 'Xubo Liu', 'Xinhao Mei', 'Qiuqiang Kong', 'Yuping Wang', 'Wenwu Wang', 'Yuxuan Wang', 'Mark D. Plumbley']",http://arxiv.org/pdf/2308.05734v2.pdf,2023-08-10,," Although audio generation shares commonalities across different types of audio, such as speech, music, and sound effects, designing models for each type requires careful consideration of specific objectives and biases that can significantly differ from those of other types. To bring us closer to a unified perspective of audio generation, this paper proposes a framework that utilizes the same learning method for speech, music, and sound effect generation. Our framework introduces a general representation of audio, called ""language of audio"" (LOA). Any audio can be translated into LOA based on AudioMAE, a self-supervised pre-trained representation learning model. In the generation process, we translate any modalities into LOA by using a GPT-2 model, and we perform self-supervised audio generation learning with a latent diffusion model conditioned on LOA. The proposed framework naturally brings advantages such as in-context learning abilities and reusable self-supervised pretrained AudioMAE and latent diffusion models. Experiments on the major benchmarks of text-to-audio, text-to-music, and text-to-speech demonstrate state-of-the-art or competitive performance against previous approaches. Our code, pretrained model, and demo are available at https://audioldm.github.io/audioldm2.",,arXiv,"['cs.sd', 'cs.ai', 'cs.mm', 'eess.as', 'eess.sp']",, time travel in llms tracing data contamination in large language models,"['Shahriar Golchin', 'Mihai Surdeanu']",http://arxiv.org/pdf/2308.08493v2.pdf,2023-08-16,," Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ ""guided instruction:"" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas.
The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a ""general instruction"" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.",,arXiv,"['cs.cl', 'cs.ai', 'cs.cr', 'cs.lg']",, inductivebias learning generating code models with large language model,"['Toma Tanaka', 'Naofumi Emoto', 'Tsukasa Yumibayashi']",http://arxiv.org/pdf/2308.09890v1.pdf,2023-08-19,," Large Language Models (LLMs) have been attracting attention due to an ability called in-context learning (ICL). ICL, without updating the parameters of an LLM, makes it possible to achieve highly accurate inference based on rules ``in the context'' by merely inputting training data into the prompt. Although ICL is a developing field with many unanswered questions, LLMs themselves serve as an inference model, seemingly realizing inference without explicitly indicating ``inductive bias''. On the other hand, code generation is also a highlighted application of LLMs. The accuracy of code generation has dramatically improved, enabling even non-engineers to generate code to perform the desired tasks by crafting appropriate prompts. In this paper, we propose a novel ``learning'' method called ``Inductive-Bias Learning (IBL)'', which combines the techniques of ICL and code generation. The idea of IBL is straightforward. Like ICL, IBL inputs training data into the prompt and outputs a code with a necessary structure for inference (referred to as a ``Code Model'') from a ``contextual understanding''. Despite being a seemingly simple approach, IBL encompasses both the ``property of inference without explicit inductive bias'' inherent in ICL and the ``readability and explainability'' of code generation. Surprisingly, generated Code Models have been found to achieve predictive accuracy comparable to, and in some cases surpassing, ICL and representative machine learning models. Our IBL code is open source: https://github.com/fuyu-quant/IBLM",,arXiv,"['cs.lg', 'cs.ai', 'cs.cl', 'cs.pl']",, exploring parameterefficient finetuning techniques for code generation with large language models,"['Martin Weyssow', 'Xin Zhou', 'Kisub Kim', 'David Lo', 'Houari Sahraoui']",http://arxiv.org/pdf/2308.10462v2.pdf,2023-08-21,," Large Language Models (LLMs) demonstrate impressive capabilities to generate accurate code snippets given natural language intents in zero-shot, i.e., without the need for specific fine-tuning. While prior studies have highlighted the advantages of fine-tuning LLMs, this process incurs high computational costs, making it impractical in resource-scarce environments, particularly for models with billions of parameters. To address these challenges, previous research explored In-Context Learning (ICL) as a strategy to guide the LLM generative process with task-specific prompt examples.
However, ICL introduces inconveniences, such as the need for designing contextually relevant prompts and the absence of learning task-specific parameters, thereby limiting downstream task performance. In this context, we foresee Parameter-Efficient Fine-Tuning (PEFT) techniques as a promising approach to efficiently specialize LLMs to task-specific data while maintaining reasonable resource consumption. In this paper, we deliver a comprehensive study of PEFT techniques for LLMs under the automated code generation scenario. Our comprehensive investigation of PEFT techniques for LLMs reveals their superiority and potential over ICL across a diverse set of LLMs. Additionally, we demonstrate the extended capabilities of PEFT, showcasing its ability to learn from two distinct datasets jointly without compromising performance. Furthermore, our study highlights the potential for tuning larger LLMs and significant reductions in memory usage by combining PEFT with quantization. Therefore, this study opens opportunities for broader applications of PEFT in software engineering scenarios. Our code is available at https://github.com/martin-wey/peft-llm-code/.",,arXiv,"['cs.se', 'cs.cl', 'cs.lg']",, causal intersectionality and dual form of gradient descent for multimodal analysis a case study on hateful memes,"['Yosuke Miyanishi', 'Minh Le Nguyen']",http://arxiv.org/pdf/2308.11585v1.pdf,2023-08-19,," In the wake of the explosive growth of machine learning (ML) usage, particularly within the context of emerging Large Language Models (LLMs), comprehending the semantic significance rooted in their internal workings is crucial. While causal analyses focus on defining semantics and its quantification, the gradient-based approach is central to explainable AI (XAI), tackling the interpretation of the black box. By synergizing these approaches, the exploration of how a model's internal mechanisms illuminate its causal effect has become integral for evidence-based decision-making. A parallel line of research has revealed that intersectionality - the combinatory impact of multiple demographics of an individual - can be structured in the form of an Averaged Treatment Effect (ATE). Initially, this study illustrates that the hateful memes detection problem can be formulated as an ATE, assisted by the principles of intersectionality, and that a modality-wise summarization of gradient-based attention attribution scores can delineate the distinct behaviors of three Transformer-based models concerning ATE. Subsequently, we show that the latest LLM LLaMA2 has the ability to disentangle the intersectional nature of memes detection in an in-context learning setting, with their mechanistic properties elucidated via meta-gradient, a secondary form of gradient. In conclusion, this research contributes to the ongoing dialogue surrounding XAI and the multifaceted nature of ML models.",,arXiv,"['cs.ai', 'cs.cl']",, empowering dynamicsaware texttovideo diffusion with large language models,"['Hao Fei', 'Shengqiong Wu', 'Wei Ji', 'Hanwang Zhang', 'Tat-Seng Chua']",http://arxiv.org/pdf/2308.13812v1.pdf,2023-08-26,," Text-to-video (T2V) synthesis has gained increasing attention in the community, in which the recently emerged diffusion models (DMs) have promisingly shown stronger performance than the past approaches.
While existing state-of-the-art DMs are competent to achieve high-resolution video generation, they may largely suffer from key limitations (e.g., action occurrence disorders, crude video motions) with respect to the intricate temporal dynamics modeling, one of the crux of video synthesis. In this work, we investigate strengthening the awareness of video dynamics for DMs, for high-quality T2V generation. Inspired by human intuition, we design an innovative dynamic scene manager (dubbed as Dysen) module, which includes (step-1) extracting from input text the key actions with proper time-order arrangement, (step-2) transforming the action schedules into the dynamic scene graph (DSG) representations, and (step-3) enriching the scenes in the DSG with sufficient and reasonable details. Taking advantage of the existing powerful LLMs (e.g., ChatGPT) via in-context learning, Dysen realizes (nearly) human-level temporal dynamics understanding. Finally, the resulting video DSG with rich action scene details is encoded as fine-grained spatio-temporal features, integrated into the backbone T2V DM for video generating. Experiments on popular T2V datasets suggest that our framework consistently outperforms prior arts with significant margins, especially in the scenario with complex actions. Project page at https://haofei.vip/Dysen-VDM",,arXiv,"['cs.ai', 'cs.cv']",, identifying and mitigating the security risks of generative ai,"['Clark Barrett', 'Brad Boyd', 'Elie Burzstein', 'Nicholas Carlini', 'Brad Chen', 'Jihye Choi', 'Amrita Roy Chowdhury', 'Mihai Christodorescu', 'Anupam Datta', 'Soheil Feizi', 'Kathleen Fisher', 'Tatsunori Hashimoto', 'Dan Hendrycks', 'Somesh Jha', 'Daniel Kang', 'Florian Kerschbaum', 'Eric Mitchell', 'John Mitchell', 'Zulfikar Ramzan', 'Khawaja Shams', 'Dawn Song', 'Ankur Taly', 'Diyi Yang']",http://arxiv.org/pdf/2308.14840v4.pdf,2023-08-28,," Every major technical invention resurfaces the dual-use dilemma -- the new technology has the potential to be used for good as well as for harm. Generative AI (GenAI) techniques, such as large language models (LLMs) and diffusion models, have shown remarkable capabilities (e.g., in-context learning, code-completion, and text-to-image generation and editing). However, GenAI can be used just as well by attackers to generate new attacks and increase the velocity and efficacy of existing attacks. This paper reports the findings of a workshop held at Google (co-organized by Stanford University and the University of Wisconsin-Madison) on the dual-use dilemma posed by GenAI. This paper is not meant to be comprehensive, but is rather an attempt to synthesize some of the interesting findings from the workshop. We discuss short-term and long-term goals for the community on this topic. We hope this paper provides both a launching point for a discussion on this important topic as well as interesting problems that the research community can work to address.",,arXiv,['cs.ai'],, business process text sketch automation generation using large language model,"['Rui Zhu', 'Quanzhou Hu', 'Wenxin Li', 'Honghao Xiao', 'Chaogang Wang', 'Zixin Zhou']",http://arxiv.org/pdf/2309.01071v1.pdf,2023-09-03,," Business Process Management (BPM) is gaining increasing attention as it has the potential to cut costs while boosting output and quality. Business process document generation is a crucial stage in BPM. However, due to a shortage of datasets, data-driven deep learning techniques struggle to deliver the expected results.
We propose an approach to transform Conditional Process Trees (CPTs) into Business Process Text Sketches (BPTSs) using Large Language Models (LLMs). The traditional prompting approach (Few-shot In-Context Learning) tries to get the correct answer in one go, and it can find the pattern of transforming simple CPTs into BPTSs, but for close-domain and CPTs with complex hierarchy, the traditional prompts perform weakly and with low correctness. We suggest using this technique to break down a difficult CPT into a number of basic CPTs and then solve each one in turn, drawing inspiration from the divide-and-conquer strategy. We chose 100 process trees with depths ranging from 2 to 5 at random, as well as CPTs with many nodes, many degrees of selection, and cyclic nesting. Experiments show that our method can achieve a correct rate of 93.42%, which is 45.17% better than traditional prompting methods. Our proposed method provides a solution for business process document generation in the absence of datasets, and secondly, it becomes potentially possible to provide a large number of datasets for the process model extraction (PME) domain.",,arXiv,['cs.cl'],, textbooks are all you need ii phi15 technical report,"['Yuanzhi Li', 'Sébastien Bubeck', 'Ronen Eldan', 'Allie Del Giorno', 'Suriya Gunasekar', 'Yin Tat Lee']",http://arxiv.org/pdf/2309.05463v1.pdf,2023-09-11,," We continue the investigation into the power of smaller Transformer-based language models as initiated by \textbf{TinyStories} -- a 10 million parameter model that can produce coherent English -- and the follow-up work on \textbf{phi-1}, a 1.3 billion parameter model with Python coding performance close to the state-of-the-art. The latter work proposed to use existing Large Language Models (LLMs) to generate ``textbook quality"" data as a way to enhance the learning process compared to traditional web data. We follow the ``Textbooks Are All You Need"" approach, focusing this time on common sense reasoning in natural language, and create a new 1.3 billion parameter model named \textbf{phi-1.5}, with performance on natural language tasks comparable to models 5x larger, and surpassing most non-frontier LLMs on more complex reasoning tasks such as grade-school mathematics and basic coding. More generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs, both good -- such as the ability to ``think step by step"" or perform some rudimentary in-context learning -- and bad, including hallucinations and the potential for toxic and biased generations -- encouragingly though, we are seeing improvement on that front thanks to the absence of web data. We open-source \textbf{phi-1.5} to promote further research on these urgent topics.",,arXiv,"['cs.cl', 'cs.ai']",, uncovering mesaoptimization algorithms in transformers,"['Johannes von Oswald', 'Eyvind Niklasson', 'Maximilian Schlegel', 'Seijin Kobayashi', 'Nicolas Zucchet', 'Nino Scherrer', 'Nolan Miller', 'Mark Sandler', 'Blaise Agüera y Arcas', 'Max Vladymyrov', 'Razvan Pascanu', 'João Sacramento']",http://arxiv.org/pdf/2309.05858v1.pdf,2023-09-11,," Transformers have become the dominant model in deep learning, but the reason for their superior performance is poorly understood. Here, we hypothesize that the strong performance of Transformers stems from an architectural bias towards mesa-optimization, a learned process running within the forward pass of a model consisting of the following two steps: (i) the construction of an internal learning objective, and (ii) its corresponding solution found through optimization.
To test this hypothesis, we reverse-engineer a series of autoregressive Transformers trained on simple sequence modeling tasks, uncovering underlying gradient-based mesa-optimization algorithms driving the generation of predictions. Moreover, we show that the learned forward-pass optimization algorithm can be immediately repurposed to solve supervised few-shot tasks, suggesting that mesa-optimization might underlie the in-context learning capabilities of large language models. Finally, we propose a novel self-attention layer, the mesa-layer, that explicitly and efficiently solves optimization problems specified in context. We find that this layer can lead to improved performance in synthetic and preliminary language modeling experiments, adding weight to our hypothesis that mesa-optimization is an important operation hidden within the weights of trained Transformers.",,arXiv,"['cs.lg', 'cs.ai']",, narrowing the gap between supervised and unsupervised sentence representation learning with large language model,"['Mingxin Li', 'Richong Zhang', 'Zhijie Nie', 'Yongyi Mao']",http://arxiv.org/pdf/2309.06453v2.pdf,2023-09-12,," Sentence Representation Learning (SRL) is a fundamental task in Natural Language Processing (NLP), with the Contrastive Learning of Sentence Embeddings (CSE) being the mainstream technique due to its superior performance. An intriguing phenomenon in CSE is the significant performance gap between supervised and unsupervised methods, with their only difference lying in the training data. Previous works attribute this performance gap to differences in two representation properties (alignment and uniformity). However, since alignment and uniformity only measure the results, they fail to answer ""What aspects of the training data contribute to the performance gap?"" and ""How can the performance gap be narrowed?"". In this paper, we conduct empirical experiments to answer these ""What"" and ""How"" questions. We first answer the ""What"" question by thoroughly comparing the behavior of supervised and unsupervised CSE during their respective training processes. From the comparison, we identify the similarity pattern as a key factor to the performance gap, and introduce a metric, called Relative Fitting Difficulty (RFD), to measure the complexity of the similarity pattern. Then, based on the insights gained from the ""What"" question, we tackle the ""How"" question by increasing the pattern complexity of the training data. We achieve this by leveraging the In-Context Learning (ICL) capability of the Large Language Model (LLM) to generate data that simulates complex patterns. By utilizing the hierarchical patterns in the LLM-generated data, we effectively narrow the gap between supervised and unsupervised CSE. We release our codes and appendix at https://github.com/BDBC-KG-NLP/NGCSE.",,arXiv,"['cs.cl', 'cs.lg']",, gpt4aigchip towards nextgeneration ai accelerator design automation via large language models,"['Yonggan Fu', 'Yongan Zhang', 'Zhongzhi Yu', 'Sixu Li', 'Zhifan Ye', 'Chaojian Li', 'Cheng Wan', 'Yingyan Lin']",http://arxiv.org/pdf/2309.10730v1.pdf,2023-09-19,," The remarkable capabilities and intricate nature of Artificial Intelligence (AI) have dramatically escalated the imperative for specialized AI accelerators. Nonetheless, designing these accelerators for various AI workloads remains both labor- and time-intensive.
While existing design exploration and automation tools can partially alleviate the need for extensive human involvement, they still demand substantial hardware expertise, posing a barrier to non-experts and stifling AI accelerator development. Motivated by the astonishing potential of large language models (LLMs) for generating high-quality content in response to human language instructions, we embark on this work to examine the possibility of harnessing LLMs to automate AI accelerator design. Through this endeavor, we develop GPT4AIGChip, a framework intended to democratize AI accelerator design by leveraging human natural languages instead of domain-specific languages. Specifically, we first perform an in-depth investigation into LLMs' limitations and capabilities for AI accelerator design, thus aiding our understanding of our current position and garnering insights into LLM-powered automated AI accelerator design. Furthermore, drawing inspiration from the above insights, we develop a framework called GPT4AIGChip, which features an automated demo-augmented prompt-generation pipeline utilizing in-context learning to guide LLMs towards creating high-quality AI accelerator design. To our knowledge, this work is the first to demonstrate an effective pipeline for LLM-powered automated AI accelerator generation. Accordingly, we anticipate that our insights and framework can serve as a catalyst for innovations in next-generation LLM-powered design automation tools.",,arXiv,"['cs.lg', 'cs.ar']",, a benchmark for learning to translate a new language from one grammar book,"['Garrett Tanzer', 'Mirac Suzgun', 'Eline Visser', 'Dan Jurafsky', 'Luke Melas-Kyriazi']",http://arxiv.org/pdf/2309.16575v2.pdf,2023-09-28,," Large language models (LLMs) can perform impressive feats with in-context learning or lightweight finetuning. It is natural to wonder how well these models adapt to genuinely new tasks, but how does one find tasks that are unseen in internet-scale training sets? We turn to a field that is explicitly motivated and bottlenecked by a scarcity of web data: low-resource languages. In this paper, we introduce MTOB (Machine Translation from One Book), a benchmark for learning to translate between English and Kalamang -- a language with less than 200 speakers and therefore virtually no presence on the web -- using several hundred pages of field linguistics reference materials. This task framing is novel in that it asks a model to learn a language from a single human-readable book of grammar explanations, rather than a large mined corpus of in-domain data, more akin to L2 learning than L1 acquisition. We demonstrate that baselines using current LLMs are promising but fall short of human performance, achieving 44.7 chrF on Kalamang to English translation and 45.8 chrF on English to Kalamang translation, compared to 51.6 and 57.0 chrF by a human who learned Kalamang from the same reference materials.
We hope that MTOB will help measure LLM capabilities along a new dimension, and that the methods developed to solve it could help expand access to language technology for underserved communities by leveraging qualitatively different kinds of data than traditional machine translation.",,arXiv,['cs.cl'],, benchmarking cognitive biases in large language models as evaluators,"['Ryan Koo', 'Minhwa Lee', 'Vipul Raheja', 'Jong Inn Park', 'Zae Myung Kim', 'Dongyeop Kang']",http://arxiv.org/pdf/2309.17012v1.pdf,2023-09-29,," Large Language Models (LLMs) have recently been shown to be effective as automatic evaluators with simple prompting and in-context learning. In this work, we assemble 15 LLMs of four different size ranges and evaluate their output responses by preference ranking from the other LLMs as evaluators, such as System Star is better than System Square. We then evaluate the quality of ranking outputs introducing the Cognitive Bias Benchmark for LLMs as Evaluators (CoBBLEr), a benchmark to measure six different cognitive biases in LLM evaluation outputs, such as the Egocentric bias where a model prefers to rank its own outputs highly in evaluation. We find that LLMs are biased text quality evaluators, exhibiting strong indications on our bias benchmark (average of 40% of comparisons across all models) within each of their evaluations that question their robustness as evaluators. Furthermore, we examine the correlation between human and machine preferences and calculate the average Rank-Biased Overlap (RBO) score to be 49.6%, indicating that machine preferences are misaligned with humans. According to our findings, LLMs may still be unable to be utilized for automatic annotation aligned with human preferences. Our project page is at: https://minnesotanlp.github.io/cobbler.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, fewertoken neural speech codec with timeinvariant codes,"['Yong Ren', 'Tao Wang', 'Jiangyan Yi', 'Le Xu', 'Jianhua Tao', 'Chuyuan Zhang', 'Junzuo Zhou']",http://arxiv.org/pdf/2310.00014v1.pdf,2023-09-15,," Language model based text-to-speech (TTS) models, like VALL-E, have gained attention for their outstanding in-context learning capability in zero-shot scenarios. Neural speech codec is a critical component of these models, which can convert speech into discrete token representations. However, excessive token sequences from the codec may negatively affect prediction accuracy and restrict the progression of Language model based TTS models. To address this issue, this paper proposes a novel neural speech codec with time-invariant codes named TiCodec. By encoding and quantizing time-invariant information into a separate code, TiCodec can reduce the amount of frame-level information that needs encoding, effectively decreasing the number of tokens as codes of speech. Furthermore, this paper introduces a time-invariant encoding consistency loss to enhance the consistency of time-invariant code within an utterance and force it to capture more global information, which can benefit the zero-shot TTS task. Experimental results demonstrate that TiCodec can not only enhance the quality of reconstruction speech with fewer tokens but also increase the similarity and naturalness, as well as reduce the word error rate of the synthesized speech by the TTS model.",,arXiv,"['cs.sd', 'eess.as']",, reactable enhancing react for table question answering,"['Yunjia Zhang', 'Jordan Henkel', 'Avrilia Floratou', 'Joyce Cahoon', 'Shaleen Deep', 'Jignesh M.
Patel']",http://arxiv.org/pdf/2310.00815v1.pdf,2023-10-01,," Table Question Answering (TQA) presents a substantial challenge at the intersection of natural language processing and data analytics. This task involves answering natural language (NL) questions on top of tabular data, demanding proficiency in logical reasoning, understanding of data semantics, and fundamental analytical capabilities. Due to its significance, a substantial volume of research has been dedicated to exploring a wide range of strategies aimed at tackling this challenge including approaches that leverage Large Language Models (LLMs) through in-context learning or Chain-of-Thought (CoT) prompting as well as approaches that train and fine-tune custom models. Nonetheless, a conspicuous gap exists in the research landscape, where there is limited exploration of how innovative foundational research, which integrates incremental reasoning with external tools in the context of LLMs, as exemplified by the ReAct paradigm, could potentially bring advantages to the TQA task. In this paper, we aim to fill this gap, by introducing ReAcTable (ReAct for Table Question Answering tasks), a framework inspired by the ReAct paradigm that is carefully enhanced to address the challenges uniquely appearing in TQA tasks such as interpreting complex data semantics, dealing with errors generated by inconsistent data and generating intricate data transformations. ReAcTable relies on external tools such as SQL and Python code executors, to progressively enhance the data by generating intermediate data representations, ultimately transforming it into a more accessible format for answering the questions with greater ease. We demonstrate that ReAcTable achieves remarkable performance even when compared to fine-tuned approaches. In particular, it outperforms the best prior result on the WikiTQ benchmark, achieving an accuracy of 68.0% without requiring training a new model or fine-tuning.",,arXiv,['cs.db'],, graphtext graph reasoning in text space,"['Jianan Zhao', 'Le Zhuo', 'Yikang Shen', 'Meng Qu', 'Kai Liu', 'Michael Bronstein', 'Zhaocheng Zhu', 'Jian Tang']",http://arxiv.org/pdf/2310.01089v1.pdf,2023-10-02,," Large Language Models (LLMs) have gained the ability to assimilate human knowledge and facilitate natural language interactions with both humans and other LLMs. However, despite their impressive achievements, LLMs have not made significant advancements in the realm of graph machine learning. This limitation arises because graphs encapsulate distinct relational data, making it challenging to transform them into natural language that LLMs understand. In this paper, we bridge this gap with a novel framework, GraphText, that translates graphs into natural language. GraphText derives a graph-syntax tree for each graph that encapsulates both the node attributes and inter-node relationships. Traversal of the tree yields a graph text sequence, which is then processed by an LLM to treat graph tasks as text generation tasks. Notably, GraphText offers multiple advantages. It introduces training-free graph reasoning: even without training on graph data, GraphText with ChatGPT can achieve on par with, or even surpassing, the performance of supervised-trained graph neural networks through in-context learning (ICL). Furthermore, GraphText paves the way for interactive graph reasoning, allowing both humans and LLMs to communicate with the model seamlessly using natural language.
These capabilities underscore the vast, yet-to-be-explored potential of LLMs in the domain of graph machine learning.",,arXiv,"['cs.cl', 'cs.lg']",, lightweight incontext tuning for multimodal unified models,"['Yixin Chen', 'Shuai Zhang', 'Boran Han', 'Jiaya Jia']",http://arxiv.org/pdf/2310.05109v1.pdf,2023-10-08,," In-context learning (ICL) involves reasoning from given contextual examples. As more modalities come, this procedure is becoming more challenging as the interleaved input modalities convolute the understanding process. This is exemplified by the observation that multimodal models often struggle to effectively extrapolate from contextual examples to perform ICL. To address these challenges, we introduce MultiModal In-conteXt Tuning (M$^2$IXT), a lightweight module to enhance the ICL capabilities of multimodal unified models. The proposed M$^2$IXT module perceives an expandable context window to incorporate various labeled examples of multiple modalities (e.g., text, image, and coordinates). It can be prepended to various multimodal unified models (e.g., OFA, Unival, LLaVA) of different architectures and trained via a mixed-tasks strategy to enable rapid few-shot adaption on multiple tasks and datasets. When tuned on as little as 50K multimodal data, M$^2$IXT can boost the few-shot ICL performance significantly (e.g., 18\% relative increase for OFA), and obtained state-of-the-art results across an array of tasks including visual question answering, image captioning, visual grounding, and visual entailment, while being considerably small in terms of model parameters (e.g., $\sim$$20\times$ smaller than Flamingo or MMICL), highlighting the flexibility and effectiveness of M$^2$IXT as a multimodal in-context learner.",,arXiv,['cs.cv'],, explainable claim verification via knowledgegrounded reasoning with large language models,"['Haoran Wang', 'Kai Shu']",http://arxiv.org/pdf/2310.05253v2.pdf,2023-10-08,," Claim verification plays a crucial role in combating misinformation. While existing works on claim verification have shown promising results, a crucial piece of the puzzle that remains unsolved is to understand how to verify claims without relying on human-annotated data, which is expensive to create at a large scale. Additionally, it is important for models to provide comprehensive explanations that can justify their decisions and assist human fact-checkers. This paper presents First-Order-Logic-Guided Knowledge-Grounded (FOLK) Reasoning that can verify complex claims and generate explanations without the need for annotated evidence using Large Language Models (LLMs). FOLK leverages the in-context learning ability of LLMs to translate the claim into a First-Order-Logic (FOL) clause consisting of predicates, each corresponding to a sub-claim that needs to be verified. Then, FOLK performs FOL-Guided reasoning over a set of knowledge-grounded question-and-answer pairs to make veracity predictions and generate explanations to justify its decision-making process. This process makes our model highly explanatory, providing clear explanations of its reasoning process in human-readable form. Our experiment results indicate that FOLK outperforms strong baselines on three datasets encompassing various claim verification challenges.
Our code and data are available.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, glitter or gold deriving structured insights from sustainability reports via large language models,"['Marco Bronzini', 'Carlo Nicolini', 'Bruno Lepri', 'Andrea Passerini', 'Jacopo Staiano']",http://arxiv.org/pdf/2310.05628v3.pdf,2023-10-09,," Over the last decade, several regulatory bodies have started requiring the disclosure of non-financial information from publicly listed companies, in light of the investors' increasing attention to Environmental, Social, and Governance (ESG) issues. Publicly released information on sustainability practices is often disclosed in diverse, unstructured, and multi-modal documentation. This poses a challenge in efficiently gathering and aligning the data into a unified framework to derive insights related to Corporate Social Responsibility (CSR). Thus, using Information Extraction (IE) methods becomes an intuitive choice for delivering insightful and actionable data to stakeholders. In this study, we employ Large Language Models (LLMs), In-Context Learning, and the Retrieval-Augmented Generation (RAG) paradigm to extract structured insights related to ESG aspects from companies' sustainability reports. We then leverage graph-based representations to conduct statistical analyses concerning the extracted insights. These analyses revealed that ESG criteria cover a wide range of topics, exceeding 500, often beyond those considered in existing categorizations, and are addressed by companies through a variety of initiatives. Moreover, disclosure similarities emerged among companies from the same region or sector, validating ongoing hypotheses in the ESG literature. Lastly, by incorporating additional company attributes into our analyses, we investigated which factors impact the most on companies' ESG ratings, showing that ESG disclosure affects the obtained ratings more than other financial or company data.",,arXiv,"['cs.cl', 'cs.ce', 'cs.cy']",, are large language models post hoc explainers,"['Nicholas Kroeger', 'Dan Ley', 'Satyapriya Krishna', 'Chirag Agarwal', 'Himabindu Lakkaraju']",http://arxiv.org/pdf/2310.05797v2.pdf,2023-10-09,," Large Language Models (LLMs) are increasingly used as powerful tools for a plethora of natural language processing (NLP) applications. A recent innovation, in-context learning (ICL), enables LLMs to learn new tasks by supplying a few examples in the prompt during inference time, thereby eliminating the need for model fine-tuning. While LLMs have been utilized in several applications, their applicability in explaining the behavior of other models remains relatively unexplored. Despite the growing number of new explanation techniques, many require white-box access to the model and/or are computationally expensive, highlighting a need for next-generation post hoc explainers. In this work, we present the first framework to study the effectiveness of LLMs in explaining other predictive models. More specifically, we propose a novel framework encompassing multiple prompting strategies: i) Perturbation-based ICL, ii) Prediction-based ICL, iii) Instruction-based ICL, and iv) Explanation-based ICL, with varying levels of information about the underlying ML model and the local neighborhood of the test sample. We conduct extensive experiments with real-world benchmark datasets to demonstrate that LLM-generated explanations perform on par with state-of-the-art post hoc explainers using their ability to leverage ICL examples and their internal knowledge in generating model explanations.
On average, across four datasets and two ML models, we observe that LLMs identify the most important feature with 72.19% accuracy, opening up new frontiers in explainable artificial intelligence (XAI) to explore LLM-based explanation frameworks.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, salmon selfalignment with principlefollowing reward models,"['Zhiqing Sun', 'Yikang Shen', 'Hongxin Zhang', 'Qinhong Zhou', 'Zhenfang Chen', 'David Cox', 'Yiming Yang', 'Chuang Gan']",http://arxiv.org/pdf/2310.05910v1.pdf,2023-10-09,," Supervised Fine-Tuning (SFT) on response demonstrations combined with Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful paradigm for aligning LLM-based AI agents. However, a significant limitation of such an approach is its dependency on high-quality human annotations, making its application to intricate tasks challenging due to difficulties in obtaining consistent response demonstrations and in-distribution response preferences. This paper presents a novel approach, namely SALMON (Self-ALignMent with principle-fOllowiNg reward models), to align base language models with minimal human supervision, using only a small set of human-defined principles, yet achieving superior performance. Central to our approach is a principle-following reward model. Trained on synthetic preference data, this model can generate reward scores based on arbitrary human-defined principles. By merely adjusting these principles during the RL training phase, we gain full control over the preferences with the reward model, subsequently influencing the behavior of the RL-trained policies, and eliminating the reliance on the collection of online human preferences. Applying our method to the LLaMA-2-70b base language model, we developed an AI assistant named Dromedary-2. With only 6 exemplars for in-context learning and 31 human-defined principles, Dromedary-2 significantly surpasses the performance of several state-of-the-art AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have open-sourced the code and model weights to encourage further research into aligning LLM-based AI agents with enhanced supervision efficiency, improved controllability, and scalable oversight.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, eipetext evaluationguided iterative plan extraction for longform narrative text generation,"['Wang You', 'Wenshan Wu', 'Yaobo Liang', 'Shaoguang Mao', 'Chenfei Wu', 'Maosong Cao', 'Yuzhe Cai', 'Yiduo Guo', 'Yan Xia', 'Furu Wei', 'Nan Duan']",http://arxiv.org/pdf/2310.08185v1.pdf,2023-10-12,," Plan-and-Write is a common hierarchical approach in long-form narrative text generation, which first creates a plan to guide the narrative writing. Following this approach, several studies rely on simply prompting large language models for planning, which often yields suboptimal results. In this paper, we propose a new framework called Evaluation-guided Iterative Plan Extraction for long-form narrative text generation (EIPE-text), which extracts plans from the corpus of narratives and utilizes the extracted plans to construct a better planner. EIPE-text has three stages: plan extraction, learning, and inference. In the plan extraction stage, it iteratively extracts and improves plans from the narrative corpus and constructs a plan corpus. We propose a question answer (QA) based evaluation mechanism to automatically evaluate the plans and generate detailed plan refinement instructions to guide the iterative improvement.
In the learning stage, we build a better planner by fine-tuning with the plan corpus or in-context learning with examples in the plan corpus. Finally, we leverage a hierarchical approach to generate long-form narratives. We evaluate the effectiveness of EIPE-text in the domains of novels and storytelling. Both GPT-4-based evaluations and human evaluations demonstrate that our method can generate more coherent and relevant long-form narratives. Our code will be released in the future.",,arXiv,"['cs.cl', 'cs.ai']",, prompting large language models with chainofthought for fewshot knowledge base question generation,"['Yuanyuan Liang', 'Jianing Wang', 'Hanlun Zhu', 'Lei Wang', 'Weining Qian', 'Yunshi Lan']",http://arxiv.org/pdf/2310.08395v3.pdf,2023-10-12,," The task of Question Generation over Knowledge Bases (KBQG) aims to convert a logical form into a natural language question. For the sake of expensive cost of large-scale question annotation, the methods of KBQG under low-resource scenarios urgently need to be developed. However, current methods heavily rely on annotated data for fine-tuning, which is not well-suited for few-shot question generation. The emergence of Large Language Models (LLMs) has shown their impressive generalization ability in few-shot tasks. Inspired by Chain-of-Thought (CoT) prompting, which is an in-context learning strategy for reasoning, we formulate the KBQG task as a reasoning problem, where the generation of a complete question is split into a series of sub-question generation. Our proposed prompting method KQG-CoT first retrieves supportive logical forms from the unlabeled data pool taking account of the characteristics of the logical form. Then, we write a prompt to make explicit the reasoning chain of generating complicated questions based on the selected demonstrations. To further ensure prompt quality, we extend KQG-CoT into KQG-CoT+ via sorting the logical forms by their complexity. We conduct extensive experiments over three public KBQG datasets. The results demonstrate that our prompting method consistently outperforms other prompting baselines on the evaluated datasets. Remarkably, our KQG-CoT+ method could surpass existing few-shot SoTA results of the PathQuestions dataset by 18.25, 10.72, and 10.18 absolute points on BLEU-4, METEOR, and ROUGE-L, respectively.",,arXiv,"['cs.cl', 'cs.ai']",, mastering robot manipulation with multimodal prompts through pretraining and multitask finetuning,"['Jiachen Li', 'Qiaozi Gao', 'Michael Johnston', 'Xiaofeng Gao', 'Xuehai He', 'Suhaila Shakiah', 'Hangjie Shi', 'Reza Ghanadan', 'William Yang Wang']",http://arxiv.org/pdf/2310.09676v1.pdf,2023-10-14,," Prompt-based learning has been demonstrated as a compelling paradigm contributing to large language models' (LLMs) tremendous success. Inspired by their success in language tasks, existing research has leveraged LLMs in embodied instruction following and task planning. However, not much attention has been paid to embodied tasks with multimodal prompts, combining vision signals with text descriptions. This type of task poses a major challenge to robots' capability to understand the interconnection and complementarity between vision and language signals. In this work, we introduce an effective framework that learns a policy to perform robot manipulation with multimodal prompts from multi-task expert trajectories. Our methods consist of a two-stage training pipeline that performs inverse dynamics pretraining and multi-task finetuning.
To facilitate multimodal understanding, we design our multimodal prompt encoder by augmenting a pretrained LM with a residual connection to the visual input and model the dependencies among action dimensions. Empirically, we evaluate the efficacy of our method on the VIMA-BENCH and establish a new state-of-the-art (10% improvement in success rate). Moreover, we demonstrate that our model exhibits remarkable in-context learning ability.",,arXiv,"['cs.ro', 'cs.ai']",, unifying image processing as visual prompting question answering,"['Yihao Liu', 'Xiangyu Chen', 'Xianzheng Ma', 'Xintao Wang', 'Jiantao Zhou', 'Yu Qiao', 'Chao Dong']",http://arxiv.org/pdf/2310.10513v1.pdf,2023-10-16,," Image processing is a fundamental task in computer vision, which aims at enhancing image quality and extracting essential features for subsequent vision applications. Traditionally, task-specific models are developed for individual tasks and designing such models requires distinct expertise. Building upon the success of large language models (LLMs) in natural language processing (NLP), there is a similar trend in computer vision, which focuses on developing large-scale models through pretraining and in-context learning. This paradigm shift reduces the reliance on task-specific models, yielding a powerful unified model to deal with various tasks. However, these advances have predominantly concentrated on high-level vision tasks, with less attention paid to low-level vision tasks. To address this issue, we propose a universal model for general image processing that covers image restoration, image enhancement, image feature extraction tasks, \textit{etc}. Our proposed framework, named PromptGIP, unifies these diverse image processing tasks within a universal framework. Inspired by NLP question answering (QA) techniques, we employ a visual prompting question answering paradigm. Specifically, we treat the input-output image pair as a structured question-answer sentence, thereby reprogramming the image processing task as a prompting QA problem. PromptGIP can undertake diverse \textbf{cross-domain} tasks using provided visual prompts, eliminating the need for task-specific finetuning. Our methodology offers a universal and adaptive solution to general image processing. While PromptGIP has demonstrated a certain degree of out-of-domain task generalization capability, further research is expected to fully explore its more powerful emergent generalization.",,arXiv,"['cs.cv', 'eess.iv']",, eureka humanlevel reward design via coding large language models,"['Yecheng Jason Ma', 'William Liang', 'Guanzhi Wang', 'De-An Huang', 'Osbert Bastani', 'Dinesh Jayaraman', 'Yuke Zhu', 'Linxi Fan', 'Anima Anandkumar']",http://arxiv.org/pdf/2310.12931v1.pdf,2023-10-19,," Large Language Models (LLMs) have excelled as high-level semantic planners for sequential decision-making tasks. However, harnessing them to learn complex low-level manipulation tasks, such as dexterous pen spinning, remains an open problem. We bridge this fundamental gap and present Eureka, a human-level reward design algorithm powered by LLMs. Eureka exploits the remarkable zero-shot generation, code-writing, and in-context improvement capabilities of state-of-the-art LLMs, such as GPT-4, to perform evolutionary optimization over reward code. The resulting rewards can then be used to acquire complex skills via reinforcement learning. Without any task-specific prompting or pre-defined reward templates, Eureka generates reward functions that outperform expert human-engineered rewards.
In a diverse suite of 29 open-source RL environments that include 10 distinct robot morphologies, Eureka outperforms human experts on 83% of the tasks, leading to an average normalized improvement of 52%. The generality of Eureka also enables a new gradient-free in-context learning approach to reinforcement learning from human feedback (RLHF), readily incorporating human inputs to improve the quality and the safety of the generated rewards without model updating. Finally, using Eureka rewards in a curriculum learning setting, we demonstrate for the first time, a simulated Shadow Hand capable of performing pen spinning tricks, adeptly manipulating a pen in circles at rapid speed.",,arXiv,"['cs.ro', 'cs.ai', 'cs.lg']",, selfprompted chainofthought on large language models for opendomain multihop reasoning,"['Jinyuan Wang', 'Junlong Li', 'Hai Zhao']",http://arxiv.org/pdf/2310.13552v2.pdf,2023-10-20,," In open-domain question-answering (ODQA), most existing questions require single-hop reasoning on commonsense. To further extend this task, we officially introduce open-domain multi-hop reasoning (ODMR) by answering multi-hop questions with explicit reasoning steps in open-domain setting. Recently, large language models (LLMs) have found significant utility in facilitating ODQA without external corpus. Furthermore, chain-of-thought (CoT) prompting boosts the reasoning capability of LLMs to a greater extent with manual or automated paradigms. However, existing automated methods lack of quality assurance, while manual approaches suffer from limited scalability and poor diversity, hindering the capabilities of LLMs. In this paper, we propose Self-prompted Chain-of-Thought (SP-CoT), an automated framework to mass-produce high quality CoTs of LLMs, by LLMs and for LLMs. SP-CoT introduces an automated generation pipeline of high quality ODMR datasets, an adaptive sampler for in-context CoT selection and self-prompted inference via in-context learning. Extensive experiments on four multi-hop question-answering benchmarks show that our proposed SP-CoT not only significantly surpasses the previous SOTA methods on large-scale (175B) LLMs, but also nearly doubles the zero-shot performance of small-scale (13B) LLMs. Further analysis reveals the remarkable capability of SP-CoT to elicit direct and concise intermediate reasoning steps by recalling $\sim$50\% of intermediate answers on MuSiQue-Ans dataset.",,arXiv,"['cs.cl', 'cs.ai']",, investigating the fairness of large language models for predictions on tabular data,"['Yanchen Liu', 'Srishti Gautam', 'Jiaqi Ma', 'Himabindu Lakkaraju']",http://arxiv.org/pdf/2310.14607v1.pdf,2023-10-23,," Recent literature has suggested the potential of using large language models (LLMs) to make predictions for tabular tasks. However, LLMs have been shown to exhibit harmful social biases that reflect the stereotypes and inequalities present in the society. To this end, as well as the widespread use of tabular data in many high-stake applications, it is imperative to explore the following questions: what sources of information do LLMs draw upon when making predictions for tabular tasks; whether and to what extent are LLM predictions for tabular tasks influenced by social biases and stereotypes; and what are the consequential implications for fairness? Through a series of experiments, we delve into these questions and show that LLMs tend to inherit social biases from their training data which significantly impact their fairness in tabular prediction tasks.
Furthermore, our investigations show that in the context of bias mitigation, though in-context learning and fine-tuning have a moderate effect, the fairness metric gap between different subgroups is still larger than that in traditional machine learning models, such as Random Forest and shallow Neural Networks. This observation emphasizes that the social biases are inherent within the LLMs themselves and inherited from their pre-training corpus, not only from the downstream task datasets. Besides, we demonstrate that label-flipping of in-context examples can significantly reduce biases, further highlighting the presence of inherent bias within LLMs.",,arXiv,"['cs.cl', 'cs.lg']",, large language models are visual reasoning coordinators,"['Liangyu Chen', 'Bo Li', 'Sheng Shen', 'Jingkang Yang', 'Chunyuan Li', 'Kurt Keutzer', 'Trevor Darrell', 'Ziwei Liu']",http://arxiv.org/pdf/2310.15166v1.pdf,2023-10-23,," Visual reasoning requires multimodal perception and commonsense cognition of the world. Recently, multiple vision-language models (VLMs) have been proposed with excellent commonsense reasoning ability in various domains. However, how to harness the collective power of these complementary VLMs is rarely explored. Existing methods like ensemble still struggle to aggregate these models with the desired higher-order communications. In this work, we propose Cola, a novel paradigm that coordinates multiple VLMs for visual reasoning. Our key insight is that a large language model (LLM) can efficiently coordinate multiple VLMs by facilitating natural language communication that leverages their distinct and complementary capabilities. Extensive experiments demonstrate that our instruction tuning variant, Cola-FT, achieves state-of-the-art performance on visual question answering (VQA), outside knowledge VQA, visual entailment, and visual spatial reasoning tasks. Moreover, we show that our in-context learning variant, Cola-Zero, exhibits competitive performance in zero and few-shot settings, without finetuning. Through systematic ablation studies and visualizations, we validate that a coordinator LLM indeed comprehends the instruction prompts as well as the separate functionalities of VLMs; it then coordinates them to enable impressive visual reasoning capabilities.",,arXiv,"['cs.cv', 'cs.cl']",, function vectors in large language models,"['Eric Todd', 'Millicent L. Li', 'Arnab Sen Sharma', 'Aaron Mueller', 'Byron C. Wallace', 'David Bau']",http://arxiv.org/pdf/2310.15213v1.pdf,2023-10-23,," We report the presence of a simple neural mechanism that represents an input-output function as a vector within autoregressive transformer language models (LMs). Using causal mediation analysis on a diverse range of in-context-learning (ICL) tasks, we find that a small number of attention heads transport a compact representation of the demonstrated task, which we call a function vector (FV). FVs are robust to changes in context, i.e., they trigger execution of the task on inputs such as zero-shot and natural text settings that do not resemble the ICL contexts from which they are collected. We test FVs across a range of tasks, models, and layers and find strong causal effects across settings in middle layers. We investigate the internal structure of FVs and find that while they often contain information that encodes the output space of the function, this information alone is not sufficient to reconstruct an FV.
Finally, we test semantic vector composition in FVs, and find that to some extent they can be summed to create vectors that trigger new complex tasks. Taken together, our findings suggest that LLMs contain internal abstractions of general-purpose functions that can be invoked in a variety of contexts.",,arXiv,"['cs.cl', 'cs.lg']",, tcrallm token compression retrieval augmented large language model for inference cost reduction,"['Junyi Liu', 'Liangzhi Li', 'Tong Xiang', 'Bowen Wang', 'Yiming Qian']",http://arxiv.org/pdf/2310.15556v2.pdf,2023-10-24,," Since ChatGPT released its API for public use, the number of applications built on top of commercial large language models (LLMs) has increased exponentially. One popular use of such models is to leverage their in-context learning ability and generate responses to user queries using knowledge obtained by retrieval augmentation. One problem of deploying commercial retrieval-augmented LLMs is the cost due to the additionally retrieved context that largely increases the input token size of the LLMs. To mitigate this, we propose a token compression scheme that includes two methods: summarization compression and semantic compression. The first method applies a T5-based model that is fine-tuned on datasets generated using self-instruct, containing samples with varying lengths, and reduces token size via summarization. The second method further compresses the token size by removing words with lower impact on the semantics. In order to adequately evaluate the effectiveness of the proposed methods, we propose and utilize a dataset called Food-Recommendation DB (FRDB) focusing on food recommendation for women around the pregnancy period or for infants. Our summarization compression can reduce the retrieval token size by 65% with a further 0.3% improvement in accuracy; semantic compression provides a more flexible way to trade off token size against performance, for which we can reduce the token size by 20% with only a 1.6% drop in accuracy.",,arXiv,"['cs.cl', 'cs.ir']",, testing the limits unusual text inputs generation for mobile app crash detection with large language model,"['Zhe Liu', 'Chunyang Chen', 'Junjie Wang', 'Mengzhuo Chen', 'Boyu Wu', 'Xing Che', 'Dandan Wang', 'Qing Wang']",http://arxiv.org/pdf/2310.15657v1.pdf,2023-10-24,," Mobile applications have become a ubiquitous part of our daily life, providing users with access to various services and utilities. Text input, as an important interaction channel between users and applications, plays an important role in core functionality such as search queries, authentication, messaging, etc. However, certain special text (e.g., -18 for Font Size) can cause the app to crash, and generating diversified unusual inputs for fully testing the app is in high demand. Nevertheless, this is also challenging due to the combinatorial explosion dilemma, high context sensitivity, and complex constraint relations. This paper proposes InputBlaster, which leverages the LLM to automatically generate unusual text inputs for mobile app crash detection. It formulates the unusual input generation problem as a task of producing a set of test generators, each of which can yield a batch of unusual text inputs under the same mutation rule. In detail, InputBlaster leverages the LLM to produce the test generators together with the mutation rules serving as the reasoning chain, and utilizes the in-context learning schema to provide the LLM with demonstration examples, boosting performance. 
InputBlaster is evaluated on 36 text input widgets with crash bugs involving 31 popular Android apps, and results show that it achieves a 78% bug detection rate, 136% higher than the best baseline. Besides, we integrate it with an automated GUI testing tool and detect 37 unseen crashes in real-world apps from Google Play.",,arXiv,['cs.se'],, unleashing the creative mind language model as hierarchical policy for improved exploration on challenging problem solving,"['Zhan Ling', 'Yunhao Fang', 'Xuanlin Li', 'Tongzhou Mu', 'Mingu Lee', 'Reza Pourreza', 'Roland Memisevic', 'Hao Su']",http://arxiv.org/pdf/2311.00694v2.pdf,2023-11-01,," Large Language Models (LLMs) have achieved tremendous progress, yet they still often struggle with challenging reasoning problems. Current approaches address this challenge by sampling or searching detailed and low-level reasoning chains. However, these methods are still limited in their exploration capabilities, making it challenging for correct solutions to stand out in the huge solution space. In this work, we unleash LLMs' creative potential for exploring multiple diverse problem-solving strategies by framing an LLM as a hierarchical policy via in-context learning. This policy comprises a visionary leader that proposes multiple diverse high-level problem-solving tactics as hints, accompanied by a follower that executes detailed problem-solving processes following each high-level instruction. The follower uses each of the leader's directives as a guide and samples multiple reasoning chains to tackle the problem, generating a solution group for each leader proposal. Additionally, we propose an effective and efficient tournament-based approach to select among these explored solution groups to reach the final answer. Our approach produces meaningful and inspiring hints, enhances problem-solving strategy exploration, and improves the final answer accuracy on challenging problems in the MATH dataset. Code will be released at https://github.com/lz1oceani/LLM-As-Hierarchical-Policy.",,arXiv,"['cs.ai', 'cs.cl']",, sentiment analysis through llm negotiations,"['Xiaofei Sun', 'Xiaoya Li', 'Shengyu Zhang', 'Shuhe Wang', 'Fei Wu', 'Jiwei Li', 'Tianwei Zhang', 'Guoyin Wang']",http://arxiv.org/pdf/2311.01876v1.pdf,2023-11-03,," A standard paradigm for sentiment analysis is to rely on a single LLM and make the decision in a single round under the framework of in-context learning. This framework suffers from the key disadvantage that the single-turn output generated by a single LLM might not deliver the perfect decision, just as humans sometimes need multiple attempts to get things right. This is especially true for the task of sentiment analysis, where deep reasoning is required to address complex linguistic phenomena (e.g., clause composition, irony, etc.) in the input. To address this issue, this paper introduces a multi-LLM negotiation framework for sentiment analysis. The framework consists of a reasoning-infused generator to provide a decision along with its rationale, and an explanation-deriving discriminator to evaluate the credibility of the generator. The generator and the discriminator iterate until a consensus is reached. The proposed framework naturally addresses the aforementioned challenge, as we are able to take advantage of the complementary abilities of two LLMs and have them use rationales to persuade each other for correction. 
Experiments on a wide range of sentiment analysis benchmarks (SST-2, Movie Review, Twitter, Yelp, Amazon, IMDB) demonstrate the effectiveness of the proposed approach: it consistently yields better performance than the ICL baseline across all benchmarks, and even superior performance to supervised baselines on the Twitter and movie review datasets.",,arXiv,['cs.cl'],, chef a comprehensive evaluation framework for standardized assessment of multimodal large language models,"['Zhelun Shi', 'Zhipin Wang', 'Hongxing Fan', 'Zhenfei Yin', 'Lu Sheng', 'Yu Qiao', 'Jing Shao']",http://arxiv.org/pdf/2311.02692v1.pdf,2023-11-05,," Multimodal Large Language Models (MLLMs) have shown impressive abilities in interacting with visual content, with myriad potential downstream tasks. However, even though a list of benchmarks has been proposed, the capabilities and limitations of MLLMs are still not comprehensively understood, due to a lack of a standardized and holistic evaluation framework. To this end, we present the first Comprehensive Evaluation Framework (ChEF) that can holistically profile each MLLM and fairly compare different MLLMs. First, we structure ChEF as four modular components, i.e., Scenario as scalable multimodal datasets, Instruction as flexible instruction retrieving formulae, Inferencer as reliable question answering strategies, and Metric as indicative task-specific score functions. Based on them, ChEF facilitates versatile evaluations in a standardized framework, and new evaluations can be built by designing new Recipes (systematic selection of these four components). Notably, current MLLM benchmarks can be readily summarized as recipes of ChEF. Second, we introduce 6 new recipes to quantify competent MLLMs' desired capabilities (also called desiderata, i.e., calibration, in-context learning, instruction following, language performance, hallucination, and robustness) as reliable agents that can perform real-world multimodal interactions. Third, we conduct a large-scale evaluation of 9 prominent MLLMs on 9 scenarios and 6 desiderata. Our evaluation summarizes over 20 valuable observations concerning the generalizability of MLLMs across various scenarios and the composite capability of MLLMs required for multimodal interactions. We will publicly release all the detailed implementations for further analysis, as well as an easy-to-use modular toolkit for the integration of new recipes and models, so that ChEF can be a growing evaluation framework for the MLLM community.",,arXiv,['cs.cv'],, kinematicaware prompting for generalizable articulated object manipulation with llms,"['Wenke Xia', 'Dong Wang', 'Xincheng Pang', 'Zhigang Wang', 'Bin Zhao', 'Di Hu']",http://arxiv.org/pdf/2311.02847v2.pdf,2023-11-06,," Generalizable articulated object manipulation is essential for home-assistant robots. Recent efforts focus on imitation learning from demonstrations or reinforcement learning in simulation; however, due to the prohibitive costs of real-world data collection and precise object simulation, it remains challenging for these works to achieve broad adaptability across diverse articulated objects. Recently, many works have tried to utilize the strong in-context learning ability of Large Language Models (LLMs) to achieve generalizable robotic manipulation, but most of this research focuses on high-level task planning, sidelining low-level robotic control. 
In this work, building on the idea that the kinematic structure of the object determines how we can manipulate it, we propose a kinematic-aware prompting framework that prompts LLMs with kinematic knowledge of objects to generate low-level motion trajectory waypoints, supporting various object manipulation tasks. To effectively prompt LLMs with the kinematic structure of different objects, we design a unified kinematic knowledge parser, which represents various articulated objects as a unified textual description containing kinematic joints and contact location. Building upon this unified description, a kinematic-aware planner model is proposed to generate precise 3D manipulation waypoints via a designed kinematic-aware chain-of-thought prompting method. Our evaluation spanned 48 instances across 16 distinct categories, revealing that our framework not only outperforms traditional methods on 8 seen categories but also shows a powerful zero-shot capability for 8 unseen articulated object categories. Moreover, the real-world experiments on 7 different object categories prove our framework's adaptability in practical scenarios. Code is released at https://github.com/GeWu-Lab/LLM_articulated_object_manipulation/tree/main.",,arXiv,"['cs.ro', 'cs.ai']",, incontext learning for knowledge base question answering for unmanned systems based on large language models,"['Yunlong Chen', 'Yaming Zhang', 'Jianfei Yu', 'Li Yang', 'Rui Xia']",http://arxiv.org/pdf/2311.02956v1.pdf,2023-11-06,," Knowledge Base Question Answering (KBQA) aims to answer factoid questions based on knowledge bases. However, generating the most appropriate knowledge base query code based on Natural Language Questions (NLQ) poses a significant challenge in KBQA. In this work, we focus on the CCKS2023 Competition of Question Answering with Knowledge Graph Inference for Unmanned Systems. Inspired by the recent success of large language models (LLMs) like ChatGPT and GPT-3 in many QA tasks, we propose a ChatGPT-based Cypher Query Language (CQL) generation framework to generate the most appropriate CQL based on the given NLQ. Our generative framework contains six parts: an auxiliary model predicting the syntax-related information of the CQL based on the given NLQ, a proper noun matcher extracting proper nouns from the given NLQ, a demonstration example selector retrieving similar examples of the input sample, a prompt constructor designing the input template for ChatGPT, a ChatGPT-based generation model generating the CQL, and an ensemble model to obtain the final answers from diversified outputs. With our ChatGPT-based CQL generation framework, we achieved second place in the CCKS 2023 Question Answering with Knowledge Graph Inference for Unmanned Systems competition, with an F1-score of 0.92676.",,arXiv,"['cs.cl', 'cs.ai', 'i.2.7']",, retrievalaugmented code generation for universal information extraction,"['Yucan Guo', 'Zixuan Li', 'Xiaolong Jin', 'Yantao Liu', 'Yutao Zeng', 'Wenxuan Liu', 'Xiang Li', 'Pan Yang', 'Long Bai', 'Jiafeng Guo', 'Xueqi Cheng']",http://arxiv.org/pdf/2311.02962v1.pdf,2023-11-06,," Information Extraction (IE) aims to extract structural knowledge (e.g., entities, relations, events) from natural language texts, which brings challenges to existing methods due to task-specific schemas and complex text expressions. Code, as a typical kind of formalized language, is capable of describing structural knowledge under various schemas in a universal way. 
Onthe other hand, Large Language Models (LLMs) trained on both codes and textshave demonstrated powerful capabilities of transforming texts into codes, whichprovides a feasible solution to IE tasks. Therefore, in this paper, we proposea universal retrieval-augmented code generation framework based on LLMs, calledCode4UIE, for IE tasks. Specifically, Code4UIE adopts Python classes to definetask-specific schemas of various structural knowledge in a universal way. By sodoing, extracting knowledge under these schemas can be transformed intogenerating codes that instantiate the predefined Python classes with theinformation in texts. To generate these codes more precisely, Code4UIE adoptsthe in-context learning mechanism to instruct LLMs with examples. In order toobtain appropriate examples for different tasks, Code4UIE explores severalexample retrieval strategies, which can retrieve examples semantically similarto the given texts. Extensive experiments on five representative IE tasksacross nine datasets demonstrate the effectiveness of the Code4UIE framework.",,arXiv,"['cs.ai', 'cs.cl', 'cs.ir']",, unified lowresource sequence labeling by sampleaware dynamic sparse finetuning,"['Sarkar Snigdha Sarathi Das', 'Ranran Haoran Zhang', 'Peng Shi', 'Wenpeng Yin', 'Rui Zhang']",http://arxiv.org/pdf/2311.03748v1.pdf,2023-11-07,," Unified Sequence Labeling that articulates different sequence labelingproblems such as Named Entity Recognition, Relation Extraction, Semantic RoleLabeling, etc. in a generalized sequence-to-sequence format opens up theopportunity to make the maximum utilization of large language model knowledgetoward structured prediction. Unfortunately, this requires formatting them intospecialized augmented format unknown to the base pretrained language model(PLMs) necessitating finetuning to the target format. This significantly boundsits usefulness in data-limited settings where finetuning large models cannotproperly generalize to the target format. To address this challenge andleverage PLM knowledge effectively, we propose FISH-DIP, a sample-aware dynamicsparse finetuning strategy that selectively focuses on a fraction ofparameters, informed by feedback from highly regressing examples, during thefine-tuning process. By leveraging the dynamism of sparsity, our approachmitigates the impact of well-learned samples and prioritizes underperforminginstances for improvement in generalization. Across five tasks of sequencelabeling, we demonstrate that FISH-DIP can smoothly optimize the model in lowresource settings offering upto 40% performance improvements over fullfine-tuning depending on target evaluation settings. Also, compared toin-context learning and other parameter-efficient fine-tuning approaches,FISH-DIP performs comparably or better, notably in extreme low-resourcesettings.",,arXiv,['cs.cl'],, ul2 unifying language learning paradigms,"['Yi Tay', 'Mostafa Dehghani', 'Vinh Q. Tran', 'Xavier Garcia', 'Jason Wei', 'Xuezhi Wang', 'Hyung Won Chung', 'Siamak Shakeri', 'Dara Bahri', 'Tal Schuster', 'Huaixiu Steven Zheng', 'Denny Zhou', 'Neil Houlsby', 'Donald Metzler']",http://arxiv.org/pdf/2205.05131v3.pdf,2022-05-10,," Existing pre-trained models are generally geared towards a particular classof problems. To date, there seems to be still no consensus on what the rightarchitecture and pre-training setup should be. This paper presents a unifiedframework for pre-training models that are universally effective acrossdatasets and setups. 
We begin by disentangling architectural archetypes withpre-training objectives -- two concepts that are commonly conflated. Next, wepresent a generalized & unified perspective for self-supervision in NLP andshow how different pre-training objectives can be cast as one another and howinterpolating between different objectives can be effective. We then proposeMixture-of-Denoisers (MoD), a pre-training objective that combines diversepre-training paradigms together. We furthermore introduce a notion of modeswitching, wherein downstream fine-tuning is associated with specificpre-training schemes. We conduct extensive ablative experiments to comparemultiple pre-training objectives and find that our method pushes thePareto-frontier by outperforming T5 & GPT-like models across multiple diversesetups. By scaling our model up to 20B parameters, we achieve SOTA performanceon 50 well-established supervised finetuning based NLP tasks. Our model alsoachieve strong results at in-context learning, outperforming 175B GPT-3 onzero-shot SuperGLUE and tripling the performance of T5-XXL on one-shotsummarization. On 0-shot MMLU, UL2 20B outperforms T0 and T5 models. UL2 20Balso works well with chain-of-thought prompting and reasoning, making it anappealing choice for research into reasoning at a small to medium scale of 20Bparameters. Finally, we apply FLAN instruction tuning to the UL2 20B model,achieving MMLU and Big-Bench scores competitive to FLAN-PaLM 62B. We releaseFlax-based T5X checkpoints for the UL2 20B & Flan-UL2 20B.",,arXiv,['cs.cl'],, humantimescale adaptation in an openended task space,"[' Adaptive Agent Team', 'Jakob Bauer', 'Kate Baumli', 'Satinder Baveja', 'Feryal Behbahani', 'Avishkar Bhoopchand', 'Nathalie Bradley-Schmieg', 'Michael Chang', 'Natalie Clay', 'Adrian Collister', 'Vibhavari Dasagi', 'Lucy Gonzalez', 'Karol Gregor', 'Edward Hughes', 'Sheleem Kashem', 'Maria Loks-Thompson', 'Hannah Openshaw', 'Jack Parker-Holder', 'Shreya Pathak', 'Nicolas Perez-Nieves', 'Nemanja Rakicevic', 'Tim Rocktäschel', 'Yannick Schroecker', 'Jakub Sygnowski', 'Karl Tuyls', 'Sarah York', 'Alexander Zacherl', 'Lei Zhang']",http://arxiv.org/pdf/2301.07608v1.pdf,2023-01-18,," Foundation models have shown impressive adaptation and scalability insupervised and self-supervised learning problems, but so far these successeshave not fully translated to reinforcement learning (RL). In this work, wedemonstrate that training an RL agent at scale leads to a general in-contextlearning algorithm that can adapt to open-ended novel embodied 3D problems asquickly as humans. In a vast space of held-out environment dynamics, ouradaptive agent (AdA) displays on-the-fly hypothesis-driven exploration,efficient exploitation of acquired knowledge, and can successfully be promptedwith first-person demonstrations. Adaptation emerges from three ingredients:(1) meta-reinforcement learning across a vast, smooth and diverse taskdistribution, (2) a policy parameterised as a large-scale attention-basedmemory architecture, and (3) an effective automated curriculum that prioritisestasks at the frontier of an agent's capabilities. We demonstrate characteristicscaling laws with respect to network size, memory length, and richness of thetraining task distribution. 
We believe our results lay the foundation forincreasingly general and adaptive RL agents that perform well acrossever-larger open-ended domains.",,arXiv,"['cs.lg', 'cs.ai', 'cs.ne']",, deidgpt zeroshot medical text deidentification by gpt4,"['Zhengliang Liu', 'Yue Huang', 'Xiaowei Yu', 'Lu Zhang', 'Zihao Wu', 'Chao Cao', 'Haixing Dai', 'Lin Zhao', 'Yiwei Li', 'Peng Shu', 'Fang Zeng', 'Lichao Sun', 'Wei Liu', 'Dinggang Shen', 'Quanzheng Li', 'Tianming Liu', 'Dajiang Zhu', 'Xiang Li']",http://arxiv.org/pdf/2303.11032v2.pdf,2023-03-20,," The digitization of healthcare has facilitated the sharing and re-using ofmedical data but has also raised concerns about confidentiality and privacy.HIPAA (Health Insurance Portability and Accountability Act) mandates removingre-identifying information before the dissemination of medical records. Thus,effective and efficient solutions for de-identifying medical data, especiallythose in free-text forms, are highly needed. While various computer-assistedde-identification methods, including both rule-based and learning-based, havebeen developed and used in prior practice, such solutions still lackgeneralizability or need to be fine-tuned according to different scenarios,significantly imposing restrictions in wider use. The advancement of largelanguage models (LLM), such as ChatGPT and GPT-4, have shown great potential inprocessing text data in the medical domain with zero-shot in-context learning,especially in the task of privacy protection, as these models can identifyconfidential information by their powerful named entity recognition (NER)capability. In this work, we developed a novel GPT4-enabled de-identificationframework (``DeID-GPT"") to automatically identify and remove the identifyinginformation. Compared to existing commonly used medical text datade-identification methods, our developed DeID-GPT showed the highest accuracyand remarkable reliability in masking private information from the unstructuredmedical text while preserving the original structure and meaning of the text.This study is one of the earliest to utilize ChatGPT and GPT-4 for medical textdata processing and de-identification, which provides insights for furtherresearch and solution development on the use of LLMs such as ChatGPT/GPT-4 inhealthcare. Codes and benchmarking data information are available athttps://github.com/yhydhx/ChatGPT-API.",,arXiv,"['cs.cl', 'cs.cy']",, taskmatrixai completing tasks by connecting foundation models with millions of apis,"['Yaobo Liang', 'Chenfei Wu', 'Ting Song', 'Wenshan Wu', 'Yan Xia', 'Yu Liu', 'Yang Ou', 'Shuai Lu', 'Lei Ji', 'Shaoguang Mao', 'Yun Wang', 'Linjun Shou', 'Ming Gong', 'Nan Duan']",http://arxiv.org/pdf/2303.16434v1.pdf,2023-03-29,," Artificial Intelligence (AI) has made incredible progress recently. On theone hand, advanced foundation models like ChatGPT can offer powerfulconversation, in-context learning and code generation abilities on a broadrange of open-domain tasks. They can also generate high-level solution outlinesfor domain-specific tasks based on the common sense knowledge they haveacquired. However, they still face difficulties with some specialized tasksbecause they lack enough domain-specific data during pre-training or they oftenhave errors in their neural network computations on those tasks that needaccurate executions. On the other hand, there are also many existing models andsystems (symbolic-based or neural-based) that can do some domain-specific tasksvery well. 
However, due to the different implementation or working mechanisms,they are not easily accessible or compatible with foundation models. Therefore,there is a clear and pressing need for a mechanism that can leverage foundationmodels to propose task solution outlines and then automatically match some ofthe sub-tasks in the outlines to the off-the-shelf models and systems withspecial functionalities to complete them. Inspired by this, we introduceTaskMatrix.AI as a new AI ecosystem that connects foundation models withmillions of APIs for task completion. Unlike most previous work that aimed toimprove a single AI model, TaskMatrix.AI focuses more on using existingfoundation models (as a brain-like central system) and APIs of other AI modelsand systems (as sub-task solvers) to achieve diversified tasks in both digitaland physical domains. As a position paper, we will present our vision of how tobuild such an ecosystem, explain each key component, and use study cases toillustrate both the feasibility of this vision and the main challenges we needto address next.",,arXiv,"['cs.ai', 'cs.cl']",, subjectdriven texttoimage generation via apprenticeship learning,"['Wenhu Chen', 'Hexiang Hu', 'Yandong Li', 'Nataniel Ruiz', 'Xuhui Jia', 'Ming-Wei Chang', 'William W. Cohen']",http://arxiv.org/pdf/2304.00186v5.pdf,2023-04-01,," Recent text-to-image generation models like DreamBooth have made remarkableprogress in generating highly customized images of a target subject, byfine-tuning an ``expert model'' for a given subject from a few examples.However, this process is expensive, since a new expert model must be learnedfor each subject. In this paper, we present SuTI, a Subject-drivenText-to-Image generator that replaces subject-specific fine tuning within-context learning. Given a few demonstrations of a new subject, SuTI caninstantly generate novel renditions of the subject in different scenes, withoutany subject-specific optimization. SuTI is powered by apprenticeship learning,where a single apprentice model is learned from data generated by a massivenumber of subject-specific expert models. Specifically, we mine millions ofimage clusters from the Internet, each centered around a specific visualsubject. We adopt these clusters to train a massive number of expert models,each specializing in a different subject. The apprentice model SuTI then learnsto imitate the behavior of these fine-tuned experts. SuTI can generatehigh-quality and customized subject-specific images 20x faster thanoptimization-based SoTA methods. On the challenging DreamBench andDreamBench-v2, our human evaluation shows that SuTI significantly outperformsexisting models like InstructPix2Pix, Textual Inversion, Imagic, Prompt2Prompt,Re-Imagen and DreamBooth, especially on the subject and text alignment aspects.",,arXiv,"['cs.cv', 'cs.ai']",, large language models are edgecase fuzzers testing deep learning libraries via fuzzgpt,"['Yinlin Deng', 'Chunqiu Steven Xia', 'Chenyuan Yang', 'Shizhuo Dylan Zhang', 'Shujing Yang', 'Lingming Zhang']",http://arxiv.org/pdf/2304.02014v1.pdf,2023-04-04,," Deep Learning (DL) library bugs affect downstream DL applications,emphasizing the need for reliable systems. Generating valid input programs forfuzzing DL libraries is challenging due to the need for satisfying bothlanguage syntax/semantics and constraints for constructing valid computationalgraphs. 
Recently, the TitanFuzz work demonstrates that modern Large LanguageModels (LLMs) can be directly leveraged to implicitly learn all the constraintsto generate valid DL programs for fuzzing. However, LLMs tend to generateordinary programs following similar patterns seen in their massive trainingcorpora, while fuzzing favors unusual inputs that cover edge cases or areunlikely to be manually produced. To fill this gap, this paper proposes FuzzGPT, the first technique to primeLLMs to synthesize unusual programs for fuzzing. FuzzGPT is built on thewell-known hypothesis that historical bug-triggering programs may includerare/valuable code ingredients important for bug finding. Traditionaltechniques leveraging such historical information require intensive humanefforts to design dedicated generators and ensure the validity of generatedprograms. FuzzGPT demonstrates that this process can be fully automated via theintrinsic capabilities of LLMs (including fine-tuning and in-context learning),while being generalizable and applicable to challenging domains. While FuzzGPTcan be applied with different LLMs, this paper focuses on the powerfulGPT-style models: Codex and CodeGen. Moreover, FuzzGPT also shows the potentialof directly leveraging the instruct-following capability of the recent ChatGPTfor effective fuzzing. Evaluation on two popular DL libraries (PyTorch andTensorFlow) shows that FuzzGPT can substantially outperform TitanFuzz,detecting 76 bugs, with 49 already confirmed as previously unknown bugs,including 11 high-priority bugs or security vulnerabilities.",,arXiv,['cs.se'],, can language models solve graph problems in natural language,"['Heng Wang', 'Shangbin Feng', 'Tianxing He', 'Zhaoxuan Tan', 'Xiaochuang Han', 'Yulia Tsvetkov']",http://arxiv.org/pdf/2305.10037v3.pdf,2023-05-17,," Large language models (LLMs) are increasingly adopted for a variety of taskswith implicit graphical structures, such as planning in robotics, multi-hopquestion answering or knowledge probing, structured commonsense reasoning, andmore. While LLMs have advanced the state-of-the-art on these tasks withstructure implications, whether LLMs could explicitly process textualdescriptions of graphs and structures, map them to grounded conceptual spaces,and perform structured operations remains underexplored. To this end, wepropose NLGraph (Natural Language Graph), a comprehensive benchmark ofgraph-based problem solving designed in natural language. NLGraph contains29,370 problems, covering eight graph reasoning tasks with varying complexityfrom simple tasks such as connectivity and shortest path up to complex problemssuch as maximum flow and simulating graph neural networks. We evaluate LLMs(GPT-3/4) with various prompting approaches on the NLGraph benchmark and findthat 1) language models do demonstrate preliminary graph reasoning abilities,2) the benefit of advanced prompting and in-context learning diminishes on morecomplex graph problems, while 3) LLMs are also (un)surprisingly brittle in theface of spurious correlations in graph and problem settings. We then proposeBuild-a-Graph Prompting and Algorithmic Prompting, two instruction-basedapproaches to enhance LLMs in solving natural language graph problems.Build-a-Graph and Algorithmic prompting improve the performance of LLMs onNLGraph by 3.07% to 16.85% across multiple tasks and settings, while how tosolve the most complicated graph reasoning tasks in our setup with languagemodels remains an open research question. 
The NLGraph benchmark and evaluationcode are available at https://github.com/Arthur-Heng/NLGraph.",,arXiv,"['cs.cl', 'cs.ai']",, improving language model negotiation with selfplay and incontext learning from ai feedback,"['Yao Fu', 'Hao Peng', 'Tushar Khot', 'Mirella Lapata']",http://arxiv.org/pdf/2305.10142v1.pdf,2023-05-17,," We study whether multiple large language models (LLMs) can autonomouslyimprove each other in a negotiation game by playing, reflecting, andcriticizing. We are interested in this question because if LLMs were able toimprove each other, it would imply the possibility of creating strong AI agentswith minimal human intervention. We ask two LLMs to negotiate with each other,playing the roles of a buyer and a seller, respectively. They aim to reach adeal with the buyer targeting a lower price and the seller a higher one. Athird language model, playing the critic, provides feedback to a player toimprove the player's negotiation strategies. We let the two agents playmultiple rounds, using previous negotiation history and AI feedback asin-context demonstrations to improve the model's negotiation strategyiteratively. We use different LLMs (GPT and Claude) for different roles and usethe deal price as the evaluation metric. Our experiments reveal multipleintriguing findings: (1) Only a subset of the language models we consider canself-play and improve the deal price from AI feedback, weaker models either donot understand the game's rules or cannot incorporate AI feedback for furtherimprovement. (2) Models' abilities to learn from the feedback differ whenplaying different roles. For example, it is harder for Claude-instant toimprove as the buyer than as the seller. (3) When unrolling the game tomultiple rounds, stronger agents can consistently improve their performance bymeaningfully using previous experiences and iterative AI feedback, yet have ahigher risk of breaking the deal. We hope our work provides insightful initialexplorations of having models autonomously improve each other with game playingand AI feedback.",,arXiv,['cs.cl'],, xtremeup a usercentric scarcedata benchmark for underrepresented languages,"['Sebastian Ruder', 'Jonathan H. Clark', 'Alexander Gutkin', 'Mihir Kale', 'Min Ma', 'Massimo Nicosia', 'Shruti Rijhwani', 'Parker Riley', 'Jean-Michel A. Sarr', 'Xinyi Wang', 'John Wieting', 'Nitish Gupta', 'Anna Katanova', 'Christo Kirov', 'Dana L. Dickinson', 'Brian Roark', 'Bidisha Samanta', 'Connie Tao', 'David I. Adelani', 'Vera Axelrod', 'Isaac Caswell', 'Colin Cherry', 'Dan Garrette', 'Reeve Ingle', 'Melvin Johnson', 'Dmitry Panteleev', 'Partha Talukdar']",http://arxiv.org/pdf/2305.11938v2.pdf,2023-05-19,," Data scarcity is a crucial issue for the development of highly multilingualNLP systems. Yet for many under-represented languages (ULs) -- languages forwhich NLP re-search is particularly far behind in meeting user needs -- it isfeasible to annotate small amounts of data. Motivated by this, we proposeXTREME-UP, a benchmark defined by: its focus on the scarce-data scenario ratherthan zero-shot; its focus on user-centric tasks -- tasks with broad adoption byspeakers of high-resource languages; and its focus on under-representedlanguages where this scarce-data scenario tends to be most realistic. XTREME-UPevaluates the capabilities of language models across 88 under-representedlanguages over 9 key user-centric technologies including ASR, OCR, MT, andinformation access tasks that are of general utility. 
We create new datasetsfor OCR, autocomplete, semantic parsing, and transliteration, and build on andrefine existing datasets for other tasks. XTREME-UP provides methodology forevaluating many modeling scenarios including text-only, multi-modal (vision,audio, and text),supervised parameter tuning, and in-context learning. Weevaluate commonly used models on the benchmark. We release all code and scriptsto train and evaluate models",,arXiv,['cs.cl'],, palix on scaling up a multilingual vision and language model,"['Xi Chen', 'Josip Djolonga', 'Piotr Padlewski', 'Basil Mustafa', 'Soravit Changpinyo', 'Jialin Wu', 'Carlos Riquelme Ruiz', 'Sebastian Goodman', 'Xiao Wang', 'Yi Tay', 'Siamak Shakeri', 'Mostafa Dehghani', 'Daniel Salz', 'Mario Lucic', 'Michael Tschannen', 'Arsha Nagrani', 'Hexiang Hu', 'Mandar Joshi', 'Bo Pang', 'Ceslee Montgomery', 'Paulina Pietrzyk', 'Marvin Ritter', 'AJ Piergiovanni', 'Matthias Minderer', 'Filip Pavetic', 'Austin Waters', 'Gang Li', 'Ibrahim Alabdulmohsin', 'Lucas Beyer', 'Julien Amelot', 'Kenton Lee', 'Andreas Peter Steiner', 'Yang Li', 'Daniel Keysers', 'Anurag Arnab', 'Yuanzhong Xu', 'Keran Rong', 'Alexander Kolesnikov', 'Mojtaba Seyedhosseini', 'Anelia Angelova', 'Xiaohua Zhai', 'Neil Houlsby', 'Radu Soricut']",http://arxiv.org/pdf/2305.18565v1.pdf,2023-05-29,," We present the training recipe and results of scaling up PaLI-X, amultilingual vision and language model, both in terms of size of the componentsand the breadth of its training task mixture. Our model achieves new levels ofperformance on a wide-range of varied and complex tasks, including multipleimage-based captioning and question-answering tasks, image-based documentunderstanding and few-shot (in-context) learning, as well as object detection,video question answering, and video captioning. PaLI-X advances thestate-of-the-art on most vision-and-language benchmarks considered (25+ ofthem). Finally, we observe emerging capabilities, such as complex counting andmultilingual object detection, tasks that are not explicitly in the trainingmix.",,arXiv,"['cs.cv', 'cs.cl', 'cs.lg']",, instruction tuned models are quick learners,"['Himanshu Gupta', 'Saurabh Arjun Sawant', 'Swaroop Mishra', 'Mutsumi Nakamura', 'Arindam Mitra', 'Santosh Mashetty', 'Chitta Baral']",http://arxiv.org/pdf/2306.05539v1.pdf,2023-05-17,," Instruction tuning of language models has demonstrated the ability to enhancemodel generalization to unseen tasks via in-context learning using a fewexamples. However, typical supervised learning still requires a plethora ofdownstream training data for finetuning. Often in real-world situations, thereis a scarcity of data available for finetuning, falling somewhere between fewshot inference and fully supervised finetuning. In this work, we demonstratethe sample efficiency of instruction tuned models over various tasks byestimating the minimal downstream training data required by them to performtransfer learning and match the performance of state-of-the-art (SOTA)supervised models. We conduct experiments on 119 tasks from Super NaturalInstructions (SuperNI) in both the single task learning (STL) and multi tasklearning (MTL) settings. Our findings reveal that, in the STL setting,instruction tuned models equipped with 25% of the downstream train data surpassthe SOTA performance on the downstream tasks. 
In the MTL setting, an instruction-tuned model trained on only 6% of the downstream training data achieves SOTA, while using 100% of the training data results in a 3.69-point improvement (ROUGE-L 74.68) over the previous SOTA. We conduct an analysis of T5 vs. Tk-Instruct by developing several baselines to demonstrate that instruction tuning aids in increasing both sample efficiency and transfer learning. Additionally, we observe a consistent ~4% performance increase in both settings when pre-finetuning is performed with instructions. Finally, we conduct a categorical study and find that, contrary to previous results, tasks in the question rewriting and title generation categories suffer from instruction tuning.",,arXiv,['cs.cl'],, synapse trajectoryasexemplar prompting with memory for computer control,"['Longtao Zheng', 'Rundong Wang', 'Xinrun Wang', 'Bo An']",http://arxiv.org/pdf/2306.07863v3.pdf,2023-06-13,," Building agents with large language models (LLMs) for computer control is a burgeoning research area, where the agent receives computer states and performs actions to complete complex tasks. Previous computer agents have demonstrated the benefits of in-context learning (ICL); however, their performance is hindered by several issues. First, the limited context length of LLMs and complex computer states restrict the number of exemplars, as a single webpage can consume the entire context. Second, the exemplars in current methods, such as high-level plans and multiple-choice questions, cannot represent complete trajectories, leading to suboptimal performance in long-horizon tasks. Third, existing computer agents rely on task-specific exemplars and overlook the similarity among tasks, resulting in poor generalization to novel tasks. To address these challenges, we introduce Synapse, a computer agent featuring three key components: i) state abstraction, which filters out task-irrelevant information from raw states, allowing more exemplars within the limited context, ii) trajectory-as-exemplar prompting, which prompts the LLM with complete trajectories of the abstracted states and actions to improve multi-step decision-making, and iii) exemplar memory, which stores the embeddings of exemplars and retrieves them via similarity search for generalization to novel tasks. We evaluate Synapse on MiniWoB++, a standard task suite, and Mind2Web, a real-world website benchmark. In MiniWoB++, Synapse achieves a 99.2% average success rate (a 10% relative improvement) across 64 tasks using demonstrations from only 48 tasks. Notably, Synapse is the first ICL method to solve the book-flight task in MiniWoB++. Synapse also exhibits a 56% relative improvement in average step success rate over the previous state-of-the-art prompting scheme in Mind2Web.",,arXiv,['cs.ai'],, language to rewards for robotic skill synthesis,"['Wenhao Yu', 'Nimrod Gileadi', 'Chuyuan Fu', 'Sean Kirmani', 'Kuang-Huei Lee', 'Montse Gonzalez Arenas', 'Hao-Tien Lewis Chiang', 'Tom Erez', 'Leonard Hasenclever', 'Jan Humplik', 'Brian Ichter', 'Ted Xiao', 'Peng Xu', 'Andy Zeng', 'Tingnan Zhang', 'Nicolas Heess', 'Dorsa Sadigh', 'Jie Tan', 'Yuval Tassa', 'Fei Xia']",http://arxiv.org/pdf/2306.08647v2.pdf,2023-06-14,," Large language models (LLMs) have demonstrated exciting progress in acquiring diverse new capabilities through in-context learning, ranging from logical reasoning to code-writing. Robotics researchers have also explored using LLMs to advance the capabilities of robotic control. 
However, since low-level robot actions are hardware-dependent and underrepresented in LLM training corpora, existing efforts in applying LLMs to robotics have largely treated LLMs as semantic planners or relied on human-engineered control primitives to interface with the robot. On the other hand, reward functions are shown to be flexible representations that can be optimized for control policies to achieve diverse tasks, while their semantic richness makes them suitable to be specified by LLMs. In this work, we introduce a new paradigm that harnesses this realization by utilizing LLMs to define reward parameters that can be optimized to accomplish a variety of robotic tasks. Using reward as the intermediate interface generated by LLMs, we can effectively bridge the gap between high-level language instructions or corrections and low-level robot actions. Meanwhile, combining this with a real-time optimizer, MuJoCo MPC, empowers an interactive behavior creation experience where users can immediately observe the results and provide feedback to the system. To systematically evaluate the performance of our proposed method, we designed a total of 17 tasks for a simulated quadruped robot and a dexterous manipulator robot. We demonstrate that our proposed method reliably tackles 90% of the designed tasks, while a baseline using primitive skills as the interface with Code-as-policies achieves 50% of the tasks. We further validated our method on a real robot arm, where complex manipulation skills such as non-prehensile pushing emerge through our interactive system.",,arXiv,"['cs.ro', 'cs.ai', 'cs.lg']",, generative type inference for python,"['Yun Peng', 'Chaozheng Wang', 'Wenxuan Wang', 'Cuiyun Gao', 'Michael R. Lyu']",http://arxiv.org/pdf/2307.09163v1.pdf,2023-07-18,," Python is a popular dynamic programming language, evidenced by its ranking as the second most commonly used language on GitHub. However, its dynamic type system can lead to potential type errors, leading researchers to explore automatic type inference approaches for Python programs. Rule-based type inference approaches can ensure the accuracy of predicted variable types, but they suffer from low coverage problems. Supervised type inference approaches, while feature-agnostic, require large, high-quality annotated datasets and are limited to pre-defined types. As zero-shot approaches, cloze-style approaches reformulate the type inference problem into a fill-in-the-blank problem. However, their performance is limited. This paper introduces TypeGen, a few-shot generative type inference approach that incorporates static domain knowledge from static analysis. TypeGen creates chain-of-thought (COT) prompts by translating the type inference steps of static analysis into prompts based on the type dependency graphs (TDGs), enabling language models to learn how static analysis infers types. By combining COT prompts with code slices and type hints, TypeGen constructs example prompts from human annotations. TypeGen only requires very few annotated examples to teach language models to generate similar COT prompts via in-context learning. Moreover, TypeGen enhances the interpretability of results through the use of the input-explanation-output strategy. Experiments show that TypeGen outperforms the best baseline Type4Py by 10.0% for argument type prediction and 22.5% in return value type prediction in terms of top-1 Exact Match by using only five examples. 
Furthermore, TypeGen achieves substantialimprovements of 27% to 84% compared to the zero-shot performance of largelanguage models with parameter sizes ranging from 1.3B to 175B in terms oftop-1 Exact Match.",,arXiv,['cs.se'],, 2nd place winning solution for the cvpr2023 visual anomaly and novelty detection challenge multimodal prompting for datacentric anomaly detection,"['Yunkang Cao', 'Xiaohao Xu', 'Chen Sun', 'Yuqi Cheng', 'Liang Gao', 'Weiming Shen']",http://arxiv.org/pdf/2306.09067v2.pdf,2023-06-15,," This technical report introduces the winning solution of the team Segment AnyAnomaly for the CVPR2023 Visual Anomaly and Novelty Detection (VAND) challenge.Going beyond uni-modal prompt, e.g., language prompt, we present a novelframework, i.e., Segment Any Anomaly + (SAA$+$), for zero-shot anomalysegmentation with multi-modal prompts for the regularization of cascaded modernfoundation models. Inspired by the great zero-shot generalization ability offoundation models like Segment Anything, we first explore their assembly (SAA)to leverage diverse multi-modal prior knowledge for anomaly localization.Subsequently, we further introduce multimodal prompts (SAA$+$) derived fromdomain expert knowledge and target image context to enable the non-parameteradaptation of foundation models to anomaly segmentation. The proposed SAA$+$model achieves state-of-the-art performance on several anomaly segmentationbenchmarks, including VisA and MVTec-AD, in the zero-shot setting. We willrelease the code of our winning solution for the CVPR2023 VAN.",,arXiv,['cs.cv'],, similarityaware multimodal prompt learning for fake news detection,"['Ye Jiang', 'Xiaomin Yu', 'Yimin Wang', 'Xiaoman Xu', 'Xingyi Song', 'Diana Maynard']",http://arxiv.org/pdf/2304.04187v3.pdf,2023-04-09,," The standard paradigm for fake news detection mainly utilizes textinformation to model the truthfulness of news. However, the discourse of onlinefake news is typically subtle and it requires expert knowledge to use textualinformation to debunk fake news. Recently, studies focusing on multimodal fakenews detection have outperformed text-only methods. Recent approaches utilizingthe pre-trained model to extract unimodal features, or fine-tuning thepre-trained model directly, have become a new paradigm for detecting fake news.Again, this paradigm either requires a large number of training instances, orupdates the entire set of pre-trained model parameters, making real-world fakenews detection impractical. Furthermore, traditional multimodal methods fusethe cross-modal features directly without considering that the uncorrelatedsemantic representation might inject noise into the multimodal features. Thispaper proposes a Similarity-Aware Multimodal Prompt Learning (SAMPLE)framework. First, we incorporate prompt learning into multimodal fake newsdetection. Prompt learning, which only tunes prompts with a frozen languagemodel, can reduce memory usage significantly and achieve comparableperformances, compared with fine-tuning. We analyse three prompt templates witha soft verbalizer to detect fake news. In addition, we introduce thesimilarity-aware fusing method to adaptively fuse the intensity of multimodalrepresentation and mitigate the noise injection via uncorrelated cross-modalfeatures. For evaluation, SAMPLE surpasses the F1 and the accuracies ofprevious works on two benchmark multimodal datasets, demonstrating theeffectiveness of the proposed method in detecting fake news. 
In addition,SAMPLE also is superior to other approaches regardless of few-shot anddata-rich settings.",,arXiv,['cs.cl'],, multitask multimodal prompted training for interactive embodied task completion,"['Georgios Pantazopoulos', 'Malvina Nikandrou', 'Amit Parekh', 'Bhathiya Hemanthage', 'Arash Eshghi', 'Ioannis Konstas', 'Verena Rieser', 'Oliver Lemon', 'Alessandro Suglia']",http://arxiv.org/pdf/2311.04067v1.pdf,2023-11-07,," Interactive and embodied tasks pose at least two fundamental challenges toexisting Vision & Language (VL) models, including 1) grounding language intrajectories of actions and observations, and 2) referential disambiguation. Totackle these challenges, we propose an Embodied MultiModal Agent (EMMA): aunified encoder-decoder model that reasons over images and trajectories, andcasts action prediction as multimodal text generation. By unifying all tasks astext generation, EMMA learns a language of actions which facilitates transferacross tasks. Different to previous modular approaches with independentlytrained components, we use a single multitask model where each task contributesto goal completion. EMMA performs on par with similar models on several VLbenchmarks and sets a new state-of-the-art performance (36.81% success rate) onthe Dialog-guided Task Completion (DTC), a benchmark to evaluate dialog-guidedagents in the Alexa Arena",,arXiv,"['cs.lg', 'cs.ai', 'cs.cv']",, parameterefficient tuning of largescale multimodal foundation model,"['Haixin Wang', 'Xinlong Yang', 'Jianlong Chang', 'Dian Jin', 'Jinan Sun', 'Shikun Zhang', 'Xiao Luo', 'Qi Tian']",http://arxiv.org/pdf/2305.08381v3.pdf,2023-05-15,," Driven by the progress of large-scale pre-training, parameter-efficienttransfer learning has gained immense popularity across different subfields ofArtificial Intelligence. The core is to adapt the model to downstream taskswith only a small set of parameters. Recently, researchers have leveraged suchproven techniques in multimodal tasks and achieve promising results. However,two critical issues remain unresolved: how to further reduce the complexitywith lightweight design and how to boost alignment between modalities underextremely low parameters. In this paper, we propose A graceful prompt frameworkfor cross-modal transfer (Aurora) to overcome these challenges. Considering theredundancy in existing architectures, we first utilize the mode approximationto generate 0.1M trainable parameters to implement the multimodal prompttuning, which explores the low intrinsic dimension with only 0.04% parametersof the pre-trained model. Then, for better modality alignment, we propose theInformative Context Enhancement and Gated Query Transformation module underextremely few parameters scenes. A thorough evaluation on six cross-modalbenchmarks shows that it not only outperforms the state-of-the-art but evenoutperforms the full fine-tuning approach. Our code is available at:https://github.com/WillDreamer/Aurora.",,arXiv,['cs.cv'],, reframing instructional prompts to gptk's language,"['Swaroop Mishra', 'Daniel Khashabi', 'Chitta Baral', 'Yejin Choi', 'Hannaneh Hajishirzi']",http://arxiv.org/pdf/2109.07830v3.pdf,2021-09-16,," What kinds of instructional prompts are easier to follow for Language Models(LMs)? We study this question by conducting extensive empirical analysis thatshed light on important features of successful instructional prompts.Specifically, we study several classes of reframing techniques for manualreformulation of prompts into more effective ones. 
Some examples includedecomposing a complex task instruction into multiple simpler tasks or itemizinginstructions into sequential steps. Our experiments compare the zero-shot andfew-shot performance of LMs prompted with reframed instructions on 12 NLP tasksacross 6 categories. Compared with original instructions, our reframedinstructions lead to significant improvements across LMs with different sizes.For example, the same reframed prompts boost few-shot performance ofGPT3-series and GPT2-series by 12.5% and 6.7% respectively averaged over alltasks. Furthermore, reframed instructions reduce the number of examplesrequired to prompt LMs in the few-shot setting. We hope theseempirically-driven techniques will pave the way towards more effective futureprompting algorithms.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, red teaming language model detectors with language models,"['Zhouxing Shi', 'Yihan Wang', 'Fan Yin', 'Xiangning Chen', 'Kai-Wei Chang', 'Cho-Jui Hsieh']",http://arxiv.org/pdf/2305.19713v2.pdf,2023-05-31,," The prevalence and strong capability of large language models (LLMs) presentsignificant safety and ethical risks if exploited by malicious users. Toprevent the potentially deceptive usage of LLMs, recent works have proposedalgorithms to detect LLM-generated text and protect LLMs. In this paper, weinvestigate the robustness and reliability of these LLM detectors underadversarial attacks. We study two types of attack strategies: 1) replacingcertain words in an LLM's output with their synonyms given the context; 2)automatically searching for an instructional prompt to alter the writing styleof the generation. In both strategies, we leverage an auxiliary LLM to generatethe word replacements or the instructional prompt. Different from previousworks, we consider a challenging setting where the auxiliary LLM can also beprotected by a detector. Experiments reveal that our attacks effectivelycompromise the performance of all detectors in the study with plausiblegenerations, underscoring the urgent need to improve the robustness ofLLM-generated text detection systems.",,arXiv,"['cs.cl', 'cs.lg']",, large language models encode clinical knowledge,"['Karan Singhal', 'Shekoofeh Azizi', 'Tao Tu', 'S. Sara Mahdavi', 'Jason Wei', 'Hyung Won Chung', 'Nathan Scales', 'Ajay Tanwani', 'Heather Cole-Lewis', 'Stephen Pfohl', 'Perry Payne', 'Martin Seneviratne', 'Paul Gamble', 'Chris Kelly', 'Nathaneal Scharli', 'Aakanksha Chowdhery', 'Philip Mansfield', 'Blaise Aguera y Arcas', 'Dale Webster', 'Greg S. Corrado', 'Yossi Matias', 'Katherine Chou', 'Juraj Gottweis', 'Nenad Tomasev', 'Yun Liu', 'Alvin Rajkomar', 'Joelle Barral', 'Christopher Semturs', 'Alan Karthikesalingam', 'Vivek Natarajan']",http://arxiv.org/pdf/2212.13138v1.pdf,2022-12-26,," Large language models (LLMs) have demonstrated impressive capabilities innatural language understanding and generation, but the quality bar for medicaland clinical applications is high. Today, attempts to assess models' clinicalknowledge typically rely on automated evaluations on limited benchmarks. Thereis no standard to evaluate model predictions and reasoning across a breadth oftasks. To address this, we present MultiMedQA, a benchmark combining sixexisting open question answering datasets spanning professional medical exams,research, and consumer queries; and HealthSearchQA, a new free-response datasetof medical questions searched online. 
We propose a framework for humanevaluation of model answers along multiple axes including factuality,precision, possible harm, and bias. In addition, we evaluate PaLM (a540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM, onMultiMedQA. Using a combination of prompting strategies, Flan-PaLM achievesstate-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA,MedMCQA, PubMedQA, MMLU clinical topics), including 67.6% accuracy on MedQA (USMedical License Exam questions), surpassing prior state-of-the-art by over 17%.However, human evaluation reveals key gaps in Flan-PaLM responses. To resolvethis we introduce instruction prompt tuning, a parameter-efficient approach foraligning LLMs to new domains using a few exemplars. The resulting model,Med-PaLM, performs encouragingly, but remains inferior to clinicians. We showthat comprehension, recall of knowledge, and medical reasoning improve withmodel scale and instruction prompt tuning, suggesting the potential utility ofLLMs in medicine. Our human evaluations reveal important limitations of today'smodels, reinforcing the importance of both evaluation frameworks and methoddevelopment in creating safe, helpful LLM models for clinical applications.",,arXiv,['cs.cl'],, instructuie multitask instruction tuning for unified information extraction,"['Xiao Wang', 'Weikang Zhou', 'Can Zu', 'Han Xia', 'Tianze Chen', 'Yuansen Zhang', 'Rui Zheng', 'Junjie Ye', 'Qi Zhang', 'Tao Gui', 'Jihua Kang', 'Jingsheng Yang', 'Siyuan Li', 'Chunsai Du']",http://arxiv.org/pdf/2304.08085v1.pdf,2023-04-17,," Large language models have unlocked strong multi-task capabilities fromreading instructive prompts. However, recent studies have shown that existinglarge models still have difficulty with information extraction tasks. Forexample, gpt-3.5-turbo achieved an F1 score of 18.22 on the Ontonotes dataset,which is significantly lower than the state-of-the-art performance. In thispaper, we propose InstructUIE, a unified information extraction framework basedon instruction tuning, which can uniformly model various information extractiontasks and capture the inter-task dependency. To validate the proposed method,we introduce IE INSTRUCTIONS, a benchmark of 32 diverse information extractiondatasets in a unified text-to-text format with expert-written instructions.Experimental results demonstrate that our method achieves comparableperformance to Bert in supervised settings and significantly outperforms thestate-of-the-art and gpt3.5 in zero-shot settings.",,arXiv,"['cs.cl', 'cs.ai']",, fewshot instruction prompts for pretrained language models to detect social biases,"['Shrimai Prabhumoye', 'Rafal Kocielnik', 'Mohammad Shoeybi', 'Anima Anandkumar', 'Bryan Catanzaro']",http://arxiv.org/pdf/2112.07868v2.pdf,2021-12-15,," Detecting social bias in text is challenging due to nuance, subjectivity, anddifficulty in obtaining good quality labeled datasets at scale, especiallygiven the evolving nature of social biases and society. To address thesechallenges, we propose a few-shot instruction-based method for promptingpre-trained language models (LMs). We select a few class-balanced exemplarsfrom a small support repository that are closest to the query to be labeled inthe embedding space. We then provide the LM with instruction that consists ofthis subset of labeled exemplars, the query text to be classified, a definitionof bias, and prompt it to make a decision. 
We demonstrate that large LMs used in a few-shot context can detect different types of fine-grained biases with similar and sometimes superior accuracy to fine-tuned models. We observe that the largest 530B parameter model is significantly more effective in detecting social bias compared to smaller models (achieving at least 13% improvement in AUC metric compared to other models). It also maintains a high AUC (dropping less than 2%) when the labeled repository is reduced to as few as $100$ samples. Large pretrained language models thus make it easier and quicker to build new bias detectors.",,arXiv,"['cs.cl', 'cs.ai']",, benchmarking a foundation llm on its ability to relabel structure names in accordance with the aapm tg263 report,"['Jason Holmes', 'Lian Zhang', 'Yuzhen Ding', 'Hongying Feng', 'Zhengliang Liu', 'Tianming Liu', 'William W. Wong', 'Sujay A. Vora', 'Jonathan B. Ashman', 'Wei Liu']",http://arxiv.org/pdf/2310.03874v1.pdf,2023-10-05,," Purpose: To introduce the concept of using large language models (LLMs) to re-label structure names in accordance with the American Association of Physicists in Medicine (AAPM) Task Group (TG)-263 standard, and to establish a benchmark for future studies to reference. Methods and Materials: The Generative Pre-trained Transformer (GPT)-4 application programming interface (API) was implemented as a Digital Imaging and Communications in Medicine (DICOM) storage server, which upon receiving a structure set DICOM file, prompts GPT-4 to re-label the structure names of both target volumes and normal tissues according to the AAPM TG-263. Three disease sites, prostate, head and neck, and thorax were selected for evaluation. For each disease site category, 150 patients were randomly selected for manually tuning the instructions prompt (in batches of 50) and 50 patients were randomly selected for evaluation. Structure names that were considered were those that were most likely to be relevant for studies utilizing structure contours for many patients. Results: The overall re-labeling accuracy of both target volumes and normal tissues for prostate, head and neck, and thorax cases was 96.0%, 98.5%, and 96.9% respectively. Re-labeling of target volumes was less accurate on average except for prostate - 100%, 93.1%, and 91.1% respectively. Conclusions: Given the accuracy of GPT-4 in re-labeling structure names of both target volumes and normal tissues as presented in this work, LLMs are poised to be the preferred method for standardizing structure names in radiation oncology, especially considering the rapid advancements in LLM capabilities that are likely to continue.",,arXiv,"['physics.med-ph', 'cs.cl']",, zeroshot information extraction from radiological reports using chatgpt,"['Danqing Hu', 'Bing Liu', 'Xiaofeng Zhu', 'Xudong Lu', 'Nan Wu']",http://arxiv.org/pdf/2309.01398v2.pdf,2023-09-04,," Electronic health records contain an enormous amount of valuable information, but many are recorded in free text. Information extraction is the strategy to transform the sequence of characters into structured data, which can be employed for secondary analysis.
However, the traditional information extraction components, such as named entity recognition and relation extraction, require annotated data to optimize the model parameters, which has become one of the major bottlenecks in building information extraction systems. With the large language models achieving good performances on various downstream NLP tasks without parameter tuning, it becomes possible to use large language models for zero-shot information extraction. In this study, we aim to explore whether the most popular large language model, ChatGPT, can extract useful information from the radiological reports. We first design the prompt template for the interested information in the CT reports. Then, we generate the prompts by combining the prompt template with the CT reports as the inputs of ChatGPT to obtain the responses. A post-processing module is developed to transform the responses into structured extraction results. We conducted the experiments with 847 CT reports collected from Peking University Cancer Hospital. The experimental results indicate that ChatGPT can achieve competitive performances for some extraction tasks compared with the baseline information extraction system, but some limitations need to be further improved.",,arXiv,['cs.cl'],, healthprompt a zeroshot learning paradigm for clinical natural language processing,"['Sonish Sivarajkumar', 'Yanshan Wang']",http://arxiv.org/pdf/2203.05061v1.pdf,2022-03-09,," Deep learning algorithms are dependent on the availability of large-scale annotated clinical text datasets. The lack of such publicly available datasets is the biggest bottleneck for the development of clinical Natural Language Processing(NLP) systems. Zero-Shot Learning(ZSL) refers to the use of deep learning models to classify instances from new classes of which no training data have been seen before. Prompt-based learning is an emerging ZSL technique where we define task-based templates for NLP tasks. We developed a novel prompt-based clinical NLP framework called HealthPrompt and applied the paradigm of prompt-based learning on clinical texts. In this technique, rather than fine-tuning a Pre-trained Language Model(PLM), the task definitions are tuned by defining a prompt template. We performed an in-depth analysis of HealthPrompt on six different PLMs in a no-data setting. Our experiments prove that prompts effectively capture the context of clinical texts and perform remarkably well without any training data.",,arXiv,"['cs.cl', 'cs.ai', 'cs.ir']",, a fewshot approach to resume information extraction via prompts,"['Chengguang Gan', 'Tatsunori Mori']",http://arxiv.org/pdf/2209.09450v2.pdf,2022-09-20,," Prompt learning's fine-tune performance on text classification tasks has attracted the NLP community. This paper applies it to resume information extraction, improving existing methods for this task. We created manual templates and verbalizers tailored to resume texts and compared the performance of Masked Language Model (MLM) and Seq2Seq PLMs. Also, we enhanced the verbalizer design for Knowledgeable Prompt-tuning, contributing to prompt template design across NLP tasks. We present the Manual Knowledgeable Verbalizer (MKV), a rule for constructing verbalizers for specific applications. Our tests show that MKV rules yield more effective, robust templates and verbalizers than existing methods. Our MKV approach resolved sample imbalance, surpassing current automatic prompt methods.
This study underscores the value of tailored prompt learning for resume extraction, stressing the importance of custom-designed templates and verbalizers.",,arXiv,['cs.cl'],, the prompt artists,"['Minsuk Chang', 'Stefania Druga', 'Alex Fiannaca', 'Pedro Vergani', 'Chinmay Kulkarni', 'Carrie Cai', 'Michael Terry']",http://arxiv.org/pdf/2303.12253v1.pdf,2023-03-22,," This paper examines the art practices, artwork, and motivations of prolific users of the latest generation of text-to-image models. Through interviews, observations, and a user survey, we present a sampling of the artistic styles and describe the developed community of practice around generative AI. We find that: 1) the text prompt and the resulting image can be considered collectively as an art piece prompts as art and 2) prompt templates (prompts with ``slots'' for others to fill in with their own words) are developed to create generative art styles. We discover that the value placed by this community on unique outputs leads to artists seeking specialized vocabulary to produce distinctive art pieces (e.g., by reading architectural blogs to find phrases to describe images). We also find that some artists use ""glitches"" in the model that can be turned into artistic styles of their own right. From these findings, we outline specific implications for design regarding future prompting and image editing options.",,arXiv,['cs.hc'],, estimating uncertainty in multimodal foundation models using public internet data,"['Shiladitya Dutta', 'Hongbo Wei', 'Lars van der Laan', 'Ahmed M. Alaa']",http://arxiv.org/pdf/2310.09926v2.pdf,2023-10-15,," Foundation models are trained on vast amounts of data at scale using self-supervised learning, enabling adaptation to a wide range of downstream tasks. At test time, these models exhibit zero-shot capabilities through which they can classify previously unseen (user-specified) categories. In this paper, we address the problem of quantifying uncertainty in these zero-shot predictions. We propose a heuristic approach for uncertainty estimation in zero-shot settings using conformal prediction with web data. Given a set of classes at test time, we conduct zero-shot classification with CLIP-style models using a prompt template, e.g., ""an image of a "", and use the same template as a search query to source calibration data from the open web. Given a web-based calibration set, we apply conformal prediction with a novel conformity score that accounts for potential errors in retrieved web data. We evaluate the utility of our proposed method in Biomedical foundation models; our preliminary results show that web-based conformal prediction sets achieve the target coverage with satisfactory efficiency on a variety of biomedical datasets.",,arXiv,['cs.ai'],, beyond yes and no improving zeroshot llm rankers via scoring finegrained relevance labels,"['Honglei Zhuang', 'Zhen Qin', 'Kai Hui', 'Junru Wu', 'Le Yan', 'Xuanhui Wang', 'Michael Bendersky']",http://arxiv.org/pdf/2310.14122v2.pdf,2023-10-21,," Zero-shot text rankers powered by recent LLMs achieve remarkable ranking performance by simply prompting. Existing prompts for pointwise LLM rankers mostly ask the model to choose from binary relevance labels like ""Yes"" and ""No"". However, the lack of intermediate relevance label options may cause the LLM to provide noisy or biased answers for documents that are partially relevant to the query.
We propose to incorporate fine-grained relevance labels into the prompt for LLM rankers, enabling them to better differentiate among documents with different levels of relevance to the query and thus derive a more accurate ranking. We study two variants of the prompt template, coupled with different numbers of relevance levels. Our experiments on 8 BEIR data sets show that adding fine-grained relevance labels significantly improves the performance of LLM rankers.",,arXiv,['cs.ir'],, "large language models can share images, too!","['Young-Jun Lee', 'Jonghwan Hyeon', 'Ho-Jin Choi']",http://arxiv.org/pdf/2310.14804v1.pdf,2023-10-23,," This paper explores the image-sharing capability of Large Language Models (LLMs), such as InstructGPT, ChatGPT, and GPT-4, in a zero-shot setting, without the help of visual foundation models. Inspired by the two-stage process of image-sharing in human dialogues, we propose a two-stage framework that allows LLMs to predict potential image-sharing turns and generate related image descriptions using our effective restriction-based prompt template. With extensive experiments, we unlock the \textit{image-sharing} capability of LLMs in zero-shot prompting, with GPT-4 achieving the best performance. Additionally, we uncover the emergent \textit{image-sharing} ability in zero-shot prompting, demonstrating the effectiveness of restriction-based prompts in both stages of our framework. Based on this framework, we augment the PhotoChat dataset with images generated by Stable Diffusion at predicted turns, namely PhotoChat++. To our knowledge, this is the first study to assess the image-sharing ability of LLMs in a zero-shot setting without visual foundation models. The source code and the dataset will be released after publication.",,arXiv,"['cs.cv', 'cs.ai', 'cs.cl']",, promptbased zeroshot relation extraction with semantic knowledge augmentation,"['Jiaying Gong', 'Hoda Eldardiry']",http://arxiv.org/pdf/2112.04539v2.pdf,2021-12-08,," In relation triplet extraction (RTE), recognizing unseen (new) relations for which there are no training instances is a challenging task. Efforts have been made to recognize unseen relations based on question-answering models or relation descriptions. However, these approaches miss the semantic information about connections between seen and unseen relations. In this paper, We propose a prompt-based model with semantic knowledge augmentation (ZS-SKA) to recognize unseen relations under the zero-shot setting. We present a new word-level analogy-based sentence translation rule and generate augmented instances with unseen relations from instances with seen relations using that new rule. We design prompts with weighted virtual label construction based on an external knowledge graph to integrate semantic knowledge information learned from seen relations. Instead of using the actual label sets in the prompt template, we construct weighted virtual label words. We learn the representations of both seen and unseen relations with augmented instances and prompts. We then calculate the distance between the generated representations using prototypical networks to predict unseen relations. Extensive experiments conducted on three public datasets FewRel, Wiki-ZSL, and NYT, show that ZS-SKA outperforms state-of-the-art methods under the zero-shot scenarios.
Our experimental results also demonstrate the effectiveness and robustness of ZS-SKA.",,arXiv,['cs.cl'],, adapting prompt for fewshot tabletotext generation,"['Zhixin Guo', 'Minyxuan Yan', 'Jiexing Qi', 'Jianping Zhou', 'Ziwei He', 'Zhouhan Lin', 'Guanjie Zheng', 'Xinbing Wang']",http://arxiv.org/pdf/2302.12468v2.pdf,2023-02-24,," Pretrained language models (PLMs) have made remarkable progress in table-to-text generation tasks. However, the lack of domain-specific knowledge makes it challenging to bridge the topological gap between tabular data and text, especially in real-world applications with limited resources. To mitigate the limitation of insufficient labeled data, we propose a novel framework: Adapt-Prompt-to-Generate (AdaPTGen). The core insight of AdaPTGen is to adapt prompt templates of domain-specific knowledge into the model, which brings at least three benefits: (1) it injects representation of normal table-related descriptions to bridge the topological gap between tabular data and texts; (2) it enables us to use large amounts of unlabeled domain-specific knowledge fully, which can alleviate the PLMs' inherent shortcomings of lacking domain knowledge; (3) it allows us to design various tasks to explore the domain-specific knowledge. Extensive experiments and analyses are conducted on three open-domain few-shot natural language generation (NLG) data sets: Humans, Songs, and Books. Compared to previous state-of-the-art approaches, our model achieves superior performance in terms of both fluency and accuracy.",,arXiv,['cs.cl'],, revisit input perturbation problems for llms a unified robustness evaluation framework for noisy slot filling task,"['Guanting Dong', 'Jinxu Zhao', 'Tingfeng Hui', 'Daichi Guo', 'Wenlong Wan', 'Boqi Feng', 'Yueyan Qiu', 'Zhuoma Gongque', 'Keqing He', 'Zechen Wang', 'Weiran Xu']",http://arxiv.org/pdf/2310.06504v1.pdf,2023-10-10,," With the increasing capabilities of large language models (LLMs), these high-performance models have achieved state-of-the-art results on a wide range of natural language processing (NLP) tasks. However, the models' performance on commonly-used benchmark datasets often fails to accurately reflect their reliability and robustness when applied to real-world noisy data. To address these challenges, we propose a unified robustness evaluation framework based on the slot-filling task to systematically evaluate the dialogue understanding capability of LLMs in diverse input perturbation scenarios. Specifically, we construct a input perturbation evaluation dataset, Noise-LLM, which contains five types of single perturbation and four types of mixed perturbation data. Furthermore, we utilize a multi-level data augmentation method (character, word, and sentence levels) to construct a candidate data pool, and carefully design two ways of automatic task demonstration construction strategies (instance-level and entity-level) with various prompt templates. Our aim is to assess how well various robustness methods of LLMs perform in real-world noisy scenarios. The experiments have demonstrated that the current open-source LLMs generally achieve limited perturbation robustness performance.
Based on these experimental observations, we make some forward-looking suggestions to fuel the research in this direction.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, do language models learn about legal entity types during pretraining,"['Claire Barale', 'Michael Rovatsos', 'Nehal Bhuta']",http://arxiv.org/pdf/2310.13092v1.pdf,2023-10-19,," Language Models (LMs) have proven their ability to acquire diverse linguistic knowledge during the pretraining phase, potentially serving as a valuable source of incidental supervision for downstream tasks. However, there has been limited research conducted on the retrieval of domain-specific knowledge, and specifically legal knowledge. We propose to explore the task of Entity Typing, serving as a proxy for evaluating legal knowledge as an essential aspect of text comprehension, and a foundational task to numerous downstream legal NLP applications. Through systematic evaluation and analysis and two types of prompting (cloze sentences and QA-based templates) and to clarify the nature of these acquired cues, we compare diverse types and lengths of entities both general and domain-specific entities, semantics or syntax signals, and different LM pretraining corpus (generic and legal-oriented) and architectures (encoder BERT-based and decoder-only with Llama2). We show that (1) Llama2 performs well on certain entities and exhibits potential for substantial improvement with optimized prompt templates, (2) law-oriented LMs show inconsistent performance, possibly due to variations in their training corpus, (3) LMs demonstrate the ability to type entities even in the case of multi-token entities, (4) all models struggle with entities belonging to sub-domains of the law (5) Llama2 appears to frequently overlook syntactic cues, a shortcoming less present in BERT-based architectures.",,arXiv,['cs.cl'],, llamarec twostage recommendation using large language models for ranking,"['Zhenrui Yue', 'Sara Rabhi', 'Gabriel de Souza Pereira Moreira', 'Dong Wang', 'Even Oldridge']",http://arxiv.org/pdf/2311.02089v1.pdf,2023-10-25,," Recently, large language models (LLMs) have exhibited significant progress in language understanding and generation. By leveraging textual features, customized LLMs are also applied for recommendation and demonstrate improvements across diverse recommendation scenarios. Yet the majority of existing methods perform training-free recommendation that heavily relies on pretrained knowledge (e.g., movie recommendation). In addition, inference on LLMs is slow due to autoregressive generation, rendering existing methods less effective for real-time recommendation. As such, we propose a two-stage framework using large language models for ranking-based recommendation (LlamaRec). In particular, we use small-scale sequential recommenders to retrieve candidates based on the user interaction history. Then, both history and retrieved items are fed to the LLM in text via a carefully designed prompt template. Instead of generating next-item titles, we adopt a verbalizer-based approach that transforms output logits into probability distributions over the candidate items. Therefore, the proposed LlamaRec can efficiently rank items without generating long text. To validate the effectiveness of the proposed framework, we compare against state-of-the-art baseline methods on benchmark datasets.
Our experimental results demonstrate the performance of LlamaRec, which consistently achieves superior performance in both recommendation performance and efficiency.",,arXiv,"['cs.ir', 'cs.ai', 'cs.cl']",, masterkey automated jailbreaking of large language model chatbots,"['Gelei Deng', 'Yi Liu', 'Yuekang Li', 'Kailong Wang', 'Ying Zhang', 'Zefeng Li', 'Haoyu Wang', 'Tianwei Zhang', 'Yang Liu']",https://doi.org/10.14722/ndss.2024.24188,2023-07-16,,"Large Language Models (LLMs) have revolutionized Artificial Intelligence (AI) services due to their exceptional proficiency in understanding and generating human-like text. LLM chatbots, in particular, have seen widespread adoption, transforming human-machine interactions. However, these LLM chatbots are susceptible to ""jailbreak"" attacks, where malicious users manipulate prompts to elicit inappropriate or sensitive responses, contravening service policies. Despite existing attempts to mitigate such threats, our research reveals a substantial gap in our understanding of these vulnerabilities, largely due to the undisclosed defensive measures implemented by LLM service providers. In this paper, we present Jailbreaker, a comprehensive framework that offers an in-depth understanding of jailbreak attacks and countermeasures. Our work makes a dual contribution. First, we propose an innovative methodology inspired by time-based SQL injection techniques to reverse-engineer the defensive strategies of prominent LLM chatbots, such as ChatGPT, Bard, and Bing Chat. This time-sensitive approach uncovers intricate details about these services' defenses, facilitating a proof-of-concept attack that successfully bypasses their mechanisms. Second, we introduce an automatic generation method for jailbreak prompts. Leveraging a fine-tuned LLM, we validate the potential of automated jailbreak generation across various commercial LLM chatbots. Our method achieves a promising average success rate of 21.58%, significantly outperforming the effectiveness of existing techniques. We have responsibly disclosed our findings to the concerned service providers, underscoring the urgent need for more robust defenses. Jailbreaker thus marks a significant step towards understanding and mitigating jailbreak threats in the realm of LLM chatbots.",6987c95f7054d2653178ac93df52aa3c0b99fcf5,Semantic Scholar,,highly relevant,"The paper explicitly mentions the use of prompt-engineered LLMs for classifying and repairing vulnerabilities in smart contracts, making it highly relevant to the topic of prompt engineering."
And moreover, we propose an adaptive filter-then-rerank paradigm to combine the strengths of LLMs and SLMs. In this paradigm, SLMs serve as filters and LLMs serve as rerankers. By prompting LLMs to rerank a small portion of difficult samples identified by SLMs, our preliminary system consistently achieves promising improvements (2.4% F1-gain on average) on various IE tasks, with an acceptable time and cost investment.",0100785773b8217c44606ab260e3212f93b0a4fd,Semantic Scholar,,highly relevant,"The paper discusses prompting as a technique, specifically mentioning the use of pre-trained large language models for the CZSL task which falls under the umbrella of prompt engineering." retrieving supporting evidence for generative question answering,"['Siqing Huo', 'Negar Arabzadeh', 'Charlie Clarke']",https://arxiv.org/pdf/2309.11392,2023-09-20,,"Current large language models (LLMs) can exhibit near-human levels of performance on many natural language-based tasks, including open-domain question answering. Unfortunately, at this time, they also convincingly hallucinate incorrect answers, so that responses to questions must be verified against external sources before they can be accepted at face value. In this paper, we report two simple experiments to automatically validate generated answers against a corpus. We base our experiments on questions and passages from the MS MARCO (V1) test collection, and a retrieval pipeline consisting of sparse retrieval, dense retrieval and neural rerankers. In the first experiment, we validate the generated answer in its entirety. After presenting a question to an LLM and receiving a generated answer, we query the corpus with the combination of the question + generated answer. We then present the LLM with the combination of the question + generated answer + retrieved answer, prompting it to indicate if the generated answer can be supported by the retrieved answer. In the second experiment, we consider the generated answer at a more granular level, prompting the LLM to extract a list of factual statements from the answer and verifying each statement separately. We query the corpus with each factual statement and then present the LLM with the statement and the corresponding retrieved evidence. The LLM is prompted to indicate if the statement can be supported and make necessary edits using the retrieved material. With an accuracy of over 80%, we find that an LLM is capable of verifying its generated answer when a corpus of supporting material is provided. However, manual assessment of a random sample of questions reveals that incorrect generated answers are missed by this verification process. While this verification process can reduce hallucinations, it can not entirely eliminate them.",0630a18fe3fe4765132ad52a591f9776cf3284bf,Semantic Scholar,,highly relevant,"The paper focuses on the design of a framework for generating different types of prompts for LLMs based on UI affordances, as well as an application of these prompts, which aligns with the interest in hard prefix prompting and prompt engineering techniques." two timin’ repairing smart contracts with a twolayered approach,"['Abhinav Jain', 'Ehan Masud', 'Michelle Han', 'Rohan Dhillon', 'Sumukh Rao', 'Arya Joshi', 'Salar Cheema', 'Saurav Kumar']",https://arxiv.org/pdf/2309.07841,2023-09-14,,"Due to the modern relevance of blockchain technology, smart contracts present both substantial risks and benefits. Vulnerabilities within them can trigger a cascade of consequences, resulting in significant losses. 
Many current papers primarily focus on classifying smart contracts for malicious intent, often relying on limited contract characteristics, such as bytecode or opcode. This paper proposes a novel, two-layered framework: 1) classifying and 2) directly repairing malicious contracts. Slither’s vulnerability report is combined with source code and passed through a pre-trained RandomForestClassifier (RFC) and Large Language Models (LLMs), classifying and repairing each suggested vulnerability. Experiments demonstrate the effectiveness of fine-tuned and prompt-engineered LLMs. The smart contract repair models, built from pre-trained GPT-3.5-Turbo and fine-tuned Llama-2-7B models, reduced the overall vulnerability count by 97.5% and 96.7% respectively. A manual inspection of repaired contracts shows that all retain functionality, indicating that the proposed method is appropriate for automatic batch classification and repair of vulnerabilities in smart contracts.",0afb64ce430c5f26752c8aed246ead6820b02049,Semantic Scholar,,highly relevant,"The paper discusses program-of-thought prompting for LLMs, which falls under the category of prompt engineering, specifically in the context of using programming logic to enhance reasoning abilities." prd peer rank and discussion improve large language model based evaluations,"['Ruosen Li', 'Teerth Patel', 'Xinya Du']",https://arxiv.org/pdf/2307.02762,2023-07-06,,"Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized""strongest""LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.",130d18d1d455336e1a5b06c85784894bb67d87ec,Semantic Scholar,,highly relevant,"The paper directly addresses the use of prompt instruction tuning and proposes a 'Mixture of Prompts' (MoPs) methodology for adapting LLMs to various tasks, which is central to the concept of prompt engineering." conavgpt multirobot cooperative visual semantic navigation using large language models,"['Bangguo Yu', 'H. Kasaei', 'Ming Cao']",https://arxiv.org/pdf/2310.07937,2023-10-11,,"In advanced human-robot interaction tasks, visual target navigation is crucial for autonomous robots navigating unknown environments. 
While numerous approaches have been developed in the past, most are designed for single-robot operations, which often suffer from reduced efficiency and robustness due to environmental complexities. Furthermore, learning policies for multi-robot collaboration are resource-intensive. To address these challenges, we propose Co-NavGPT, an innovative framework that integrates Large Language Models (LLMs) as a global planner for multi-robot cooperative visual target navigation. Co-NavGPT encodes the explored environment data into prompts, enhancing LLMs' scene comprehension. It then assigns exploration frontiers to each robot for efficient target search. Experimental results on Habitat-Matterport 3D (HM3D) demonstrate that Co-NavGPT surpasses existing models in success rates and efficiency without any learning process, demonstrating the vast potential of LLMs in multi-robot collaboration domains. The supplementary video, prompts, and code can be accessed via the following link: https://sites.google.com/view/co-navgpt",16ecaa7cf142605331fc21c9be73c7b13e8c1acd,Semantic Scholar,,highly relevant,"The paper discusses using prompts as executable code in the context of developing AI-native services, which directly relates to the application of prompt engineering." retrievalaugmented gpt35based texttosql framework with sampleaware prompting and dynamic revision chain,"['Chunxi Guo', 'Zhiliang Tian', 'Jintao Tang', 'Shasha Li', 'Zhihua Wen', 'Kaixuan Wang', 'Ting Wang']",https://arxiv.org/pdf/2307.05074,2023-07-11,,"Text-to-SQL aims at generating SQL queries for the given natural language questions and thus helping users to query databases. Prompt learning with large language models (LLMs) has emerged as a recent approach, which designs prompts to lead LLMs to understand the input question and generate the corresponding SQL. However, it faces challenges with strict SQL syntax requirements. Existing work prompts the LLMs with a list of demonstration examples (i.e. question-SQL pairs) to generate SQL, but the fixed prompts can hardly handle the scenario where the semantic gap between the retrieved demonstration and the input question is large. In this paper, we propose a retrieval-augmented prompting method for a LLM-based Text-to-SQL framework, involving sample-aware prompting and a dynamic revision chain. Our approach incorporates sample-aware demonstrations, which include the composition of SQL operators and fine-grained information related to the given question. To retrieve questions sharing similar intents with input questions, we propose two strategies for assisting retrieval. Firstly, we leverage LLMs to simplify the original questions, unifying the syntax and thereby clarifying the users' intentions. To generate executable and accurate SQLs without human intervention, we design a dynamic revision chain which iteratively adapts fine-grained feedback from the previously generated SQL. Experimental results on three Text-to-SQL benchmarks demonstrate the superiority of our method over strong baseline models.",191e300e381d4128b749d16fe3d83c8643a3bd1f,Semantic Scholar,,highly relevant,"The paper describes how carefully crafting prompts for LLMs improves text-based action generation, indicating a focus on prompt engineering." 
regionblip a unified multimodal pretraining framework for holistic and regional comprehension,"['Qiang Zhou', 'Chaohui Yu', 'Shaofeng Zhang', 'Sitong Wu', 'Zhibin Wang', 'Fan Wang']",https://arxiv.org/pdf/2308.02299,2023-08-03,,"In this work, we investigate extending the comprehension of Multi-modal Large Language Models (MLLMs) to regional objects. To this end, we propose to extract features corresponding to regional objects as soft prompts for LLM, which provides a straightforward and scalable approach and eliminates the need for LLM fine-tuning. To effectively extract regional features from regular image features and irregular point cloud features, we present a novel and unified position-assisted feature extraction module. Furthermore, training an MLLM from scratch is highly time-consuming. Thus, we propose incrementally extending existing pre-trained MLLMs to comprehend more modalities and the regional objects of those modalities. Specifically, we freeze the Q-Former from BLIP-2, an impressive MLLM, and optimize the modality-specific Lora parameters in Q-Former and LLM for each newly introduced modality. The freezing of the Q-Former eliminates the need for extensive pre-training on massive image-text data. The freezed Q-Former pre-trained from massive image-text data is also beneficial for the pre-training on image-region-text data. We name our framework RegionBLIP. We pre-train RegionBLIP on image-region-text, point-cloud-text, and point-cloud-region-text data. Experimental results verify that \Ours{} can preserve the image comprehension capability of BILP-2 and further gain a comprehension of the newly introduced point cloud modality and regional objects. The Data, Code, and Pre-trained models will be available at https://github.com/mightyzau/RegionBLIP.",1ee8c8dd9d04247515b33775532b72df7b8ec0f3,Semantic Scholar,,highly relevant,"The paper investigates the mechanism of human emotion inference in large language models through the use of prompts to activate artificial neurons, which is directly related to the concept of prompt engineering." rcot detecting and rectifying factual inconsistency in reasoning by reversing chainofthought,"['Tianci Xue', 'Ziqi Wang', 'Zhenhailong Wang', 'Chi Han', 'Pengfei Yu', 'Heng Ji']",https://arxiv.org/pdf/2305.11499,2023-05-19,,"Large language Models (LLMs) have achieved promising performance on arithmetic reasoning tasks by incorporating step-by-step chain-of-thought (CoT) prompting. However, LLMs face challenges in maintaining factual consistency during reasoning, exhibiting tendencies to condition overlooking, question misinterpretation, and condition hallucination over given problems. Existing methods use coarse-grained feedback (e.g., whether the answer is correct) to improve factual consistency. In this work, we propose RCoT (Reversing Chain-of-Thought), a novel method to improve LLMs' reasoning abilities by automatically detecting and rectifying factual inconsistency in LLMs, generated solutions. To detect factual inconsistency, RCoT first asks LLMs to reconstruct the problem based on generated solutions. Then fine-grained comparisons between the original problem and the reconstructed problem expose the factual inconsistency in the original solutions. To rectify the solution, RCoT formulates detected factual inconsistency into fine-grained feedback to guide LLMs in revising solutions. Experimental results demonstrate improvements of RCoT over standard CoT, Self-Consistency and Self-Refine across seven arithmetic datasets. 
Moreover, we find that manually written fine-grained feedback can dramatically improve LLMs' reasoning abilities (e.g., ChatGPT reaches 94.6% accuracy on GSM8K), encouraging the community to further explore the fine-grained feedback generation methods.",22d5459d1f47341b355feeb1becc37208d6ec365,Semantic Scholar,,highly relevant,"The paper explicitly mentions its focus on prompting with various large language models for the task of dialog evaluation, highlighting the importance of the structure of the prompt." language models enable simple systems for generating structured views of heterogeneous data lakes,"['Simran Arora', 'Brandon Yang', 'Sabri Eyuboglu', 'A. Narayan', 'Andrew Hojel', 'Immanuel Trummer', 'Christopher Ré']",http://arxiv.org/pdf/2304.09433,2023-04-19,," A long standing goal in the data management community is developing systems that input documents and output queryable tables without user effort. Given the sheer variety of potential documents, state-of-the art systems make simplifying assumptions and use domain specific training. In this work, we ask whether we can maintain generality by using the in-context learning abilities of large language models (LLMs). We propose and evaluate Evaporate, a prototype system powered by LLMs. We identify two strategies for implementing this system: prompt the LLM to directly extract values from documents or prompt the LLM to synthesize code that performs the extraction. Our evaluations show a cost-quality tradeoff between these two approaches. Code synthesis is cheap, but far less accurate than directly processing each document with the LLM. To improve quality while maintaining low cost, we propose an extended implementation, Evaporate-Code+, which achieves better quality than direct extraction. Our insight is to generate many candidate functions and ensemble their extractions using weak supervision. Evaporate-Code+ outperforms the state-of-the art systems using a sublinear pass over the documents with the LLM. This equates to a 110X reduction in the number of documents the LLM needs to process across our 16 real-world evaluation settings. ",2ef1c2438c3a4552db9e7080e15d8c51bc071f58,Semantic Scholar,,highly relevant,"The paper focuses on creating prompts for each demonstrated example, which directly ties in with hard prefix prompting techniques used in prompt engineering." prompting languageinformed distribution for compositional zeroshot learning,"['Wentao Bao', 'Lichang Chen', 'Heng Huang', 'Yu Kong']",https://arxiv.org/pdf/2305.14428,2023-05-23,,"Compositional zero-shot learning (CZSL) task aims to recognize unseen compositional visual concepts, e.g., sliced tomatoes, where the model is learned only from the seen compositions, e.g., sliced potatoes and red tomatoes. Thanks to the prompt tuning on large pre-trained visual language models such as CLIP, recent literature shows impressively better CZSL performance than traditional vision-based methods. However, the key aspects that impact the generalization to unseen compositions, including the diversity and informativeness of class context, and the entanglement between visual primitives, i.e., state and object, are not properly addressed in existing CLIP-based CZSL literature. In this paper, we propose a model by prompting the language-informed distribution, aka., PLID, for the CZSL task. 
Specifically, the PLID leverages pre-trained large language models (LLM) to 1) formulate the language-informed class distributions which are diverse and informative, and 2) enhance the compositionality of the class embedding. Moreover, a visual-language primitive decomposition (VLPD) module and a stochastic logit mixup (SLM) strategy are proposed to dynamically fuse the decisions from the compositional and the primitive logit space. Orthogonal to the existing literature of soft, hard, or distributional prompts, our method advocates prompting the LLM-supported class distribution that leads to a better zero-shot generalization. Experimental results on MIT-States, UT-Zappos, and C-GQA datasets show the superior performance of the PLID to the prior arts.",2ff69c238e26c473a6d8bcbb9292ded74d7fd1c2,Semantic Scholar,,highly relevant,"The paper describes using extracted keywords to prompt an LLM for generating medical context, which is then used for SLM decision-making enhancement, directly relating to hard prefix prompting." prompt middleware mapping prompts for large language models to ui affordances,"['S. Macneil', 'Andrew Tran', 'Joanne Kim', 'Ziheng Huang', 'Seth Bernstein', 'Dan Mogil']",http://arxiv.org/pdf/2307.01142,2023-07-03,,"To help users do complex work, researchers have developed techniques to integrate AI and human intelligence into user interfaces (UIs). With the recent introduction of large language models (LLMs), which can generate text in response to a natural language prompt, there are new opportunities to consider how to integrate LLMs into UIs. We present Prompt Middleware, a framework for generating prompts for LLMs based on UI affordances. These include prompts that are predefined by experts (static prompts), generated from templates with fill-in options in the UI (template-based prompts), or created from scratch (free-form prompts). We demonstrate this framework with FeedbackBuffet, a writing assistant that automatically generates feedback based on a user's text input. Inspired by prior research showing how templates can help non-experts perform more like experts, FeedbackBuffet leverages template-based prompt middleware to enable feedback seekers to specify the types of feedback they want to receive as options in a UI. These options are composed using a template to form a feedback request prompt to GPT-3. We conclude with a discussion about how Prompt Middleware can help developers integrate LLMs into UIs.",34b35c89e192b5aa3118f667ce0a3cc0d89d82c3,Semantic Scholar,,highly relevant,"The paper mentions utilizing 'abstracted prompting procedures' with Code-LLMs for story understanding, directly relating to the use of prompting techniques." when do programofthoughts work for reasoning,"['Zhen Bi', 'Ningyu Zhang', 'Yinuo Jiang', 'Shumin Deng', 'Guozhou Zheng', 'Huajun Chen']",https://arxiv.org/pdf/2308.15452,2023-08-29,,"In the realm of embodied artificial intelligence, the reasoning capabilities of Large Language Models (LLMs) play a pivotal role. Although there are effective methods like program-of-thought prompting for LLMs which uses programming language to tackle complex reasoning tasks, the specific impact of code data on the improvement of reasoning capabilities remains under-explored. To address this gap, we propose complexity-impacted reasoning score (CIRS), which combines structural and logical attributes, to measure the correlation between code and reasoning abilities. 
Specifically, we use the abstract syntax tree to encode the structural information and calculate logical complexity by considering the difficulty and the cyclomatic complexity. Through an empirical analysis, we find not all code data of complexity can be learned or understood by LLMs. Optimal level of complexity is critical to the improvement of reasoning abilities by program-aided prompting. Then we design an auto-synthesizing and stratifying algorithm, and apply it to instruction generation for mathematical reasoning and code data filtering for code generation tasks. Extensive results demonstrates the effectiveness of our proposed approach. Code will be integrated into the EasyInstruct framework at https://github.com/zjunlp/EasyInstruct.",412fe1f135cb20c952962133ca1e534a71bfd27f,Semantic Scholar,,somewhat relevant,"The paper indicates the use of prompting with LLMs during training for dataset creation, which is related to prompt engineering." sweeping heterogeneity with smart mops mixture of prompts for llm task adaptation,"['Chen Dun', 'Mirian Hipolito Garcia', 'Guoqing Zheng', 'A. Awadallah', 'Anastasios Kyrillidis', 'Robert Sim']",https://arxiv.org/pdf/2310.02842,2023-10-04,,"Large Language Models (LLMs) have the ability to solve a variety of tasks, such as text summarization and mathematical questions, just out of the box, but they are often trained with a single task in mind. Due to high computational costs, the current trend is to use prompt instruction tuning to better adjust monolithic, pretrained LLMs for new -- but often individual -- downstream tasks. Thus, how one would expand prompt tuning to handle -- concomitantly -- heterogeneous tasks and data distributions is a widely open question. To address this gap, we suggest the use of \emph{Mixture of Prompts}, or MoPs, associated with smart gating functionality: the latter -- whose design is one of the contributions of this paper -- can identify relevant skills embedded in different groups of prompts and dynamically assign combined experts (i.e., collection of prompts), based on the target task. Additionally, MoPs are empirically agnostic to any model compression technique applied -- for efficiency reasons -- as well as instruction data source and task composition. In practice, MoPs can simultaneously mitigate prompt training""interference""in multi-task, multi-source scenarios (e.g., task and data heterogeneity across sources), as well as possible implications from model approximations. As a highlight, MoPs manage to decrease final perplexity from $\sim20\%$ up to $\sim70\%$, as compared to baselines, in the federated scenario, and from $\sim 3\%$ up to $\sim30\%$ in the centralized scenario.",45ee010607cad91728ae7fbad6cce3d805b93526,Semantic Scholar,,highly relevant,"The paper describes using generated prompts for LLMs based on crawled data from websites to detect phishing sites, indicating an application of prompt engineering." prompt sapper llmempowered software engineering infrastructure for ainative services,"['Zhenchang Xing', 'Qing Huang', 'Yu Cheng', 'Liming Zhu', 'Qinghua Lu', 'Xiwei Xu']",http://arxiv.org/pdf/2306.02230,2023-06-04,,"Foundation models, such as GPT-4, DALL-E have brought unprecedented AI""operating system""effect and new forms of human-AI interaction, sparking a wave of innovation in AI-native services, where natural language prompts serve as executable""code""directly (prompt as executable code), eliminating the need for programming language as an intermediary and opening up the door to personal AI. 
Prompt Sapper has emerged in response, committed to support the development of AI-native services by AI chain engineering. It creates a large language model (LLM) empowered software engineering infrastructure for authoring AI chains through human-AI collaborative intelligence, unleashing the AI innovation potential of every individual, and forging a future where everyone can be a master of AI innovation. This article will introduce the R\&D motivation behind Prompt Sapper, along with its corresponding AI chain engineering methodology and technical practices.",486a8c8655b81c7f87ff257141466ec1186d4aea,Semantic Scholar,,highly relevant,"The paper focuses on prompt injection threats and the use of natural language prompts to modulate LLM functionalities, directly pertaining to prompt engineering." actiongpt leveraging largescale language models for improved and generalized zero shot action generation,"['Sai Shashank Kalakonda', 'Shubh Maheshwari', 'Ravi Kiran Sarvadevabhatla']",http://arxiv.org/pdf/2211.15603,,,"We introduce Action-GPT, a plug and play framework for incorporating Large Language Models (LLMs) into text-based action generation models. Action phrases in current motion capture datasets contain minimal and to-the-point information. By carefully crafting prompts for LLMs, we generate richer and fine-grained descriptions of the action. We show that utilizing these detailed descriptions instead of the original action phrases leads to better alignment of text and motion spaces. Our experiments show qualitative and quantitative improvement in the quality of synthesized motions produced by recent text-to-motion models. Code, pretrained models and sample videos will be made available at https://actiongpt.github.io .",488a27aacfebfef0071017bdc6407d7d515e2e2d,Semantic Scholar,,highly relevant,"The paper describes using LLM to interpret task instructions and generate actions, which is a form of prompt engineering." human emotion knowledge representation emerges in large language model and supports discrete emotion inference,"['Ming Li', 'Yusheng Su', 'Hsiu-Yuan Huang', 'Jiali Cheng', 'Xin Hu', 'Xinmiao Zhang', 'Huadong Wang', 'Yujia Qin', 'Xiaozhi Wang', 'Zhi-Yun Liu', 'Dan Zhang']",https://arxiv.org/pdf/2302.09582,,,"How humans infer discrete emotions is a fundamental research question in the field of psychology. While conceptual knowledge about emotions (emotion knowledge) has been suggested to be essential for emotion inference, evidence to date is mostly indirect and inconclusive. As the large language models (LLMs) have been shown to support effective representations of various human conceptual knowledge, the present study further employed artificial neurons in LLMs to investigate the mechanism of human emotion inference. With artificial neurons activated by prompts, the LLM (RoBERTa) demonstrated a similar conceptual structure of 27 discrete emotions as that of human behaviors. Furthermore, the LLM-based conceptual structure revealed a human-like reliance on 14 underlying conceptual attributes of emotions for emotion inference. Most importantly, by manipulating attribute-specific neurons, we found that the corresponding LLM's emotion inference performance deteriorated, and the performance deterioration was correlated to the effectiveness of representations of the conceptual attributes on the human side. Our findings provide direct evidence for the emergence of emotion knowledge representation in large language models and suggest its casual support for discrete emotion inference. 
# These authors contributed equally: liming16@tsinghua.org.cn, yushengsu.thu@gmail.com * Corresponding authors: {liuzy, dzhang}@tsinghua.edu.cn The source code can be obtained from https://github.com/thunlp/Model_Emotion.",4a8fe7ecf225e5bada08642fcd77d3cbb322b967,Semantic Scholar,,highly relevant,"The paper describes Chain-of-Knowledge (CoK) prompting, clearly engaging in prompt engineering to improve the performance of LLMs in reasoning tasks." what do llms know about financial markets a case study on reddit market sentiment analysis,"['Xiang Deng', 'Vasilisa Bashlovkina', 'Feng Han', 'Simon Baumgartner', 'Michael Bendersky']",http://arxiv.org/pdf/2212.11311,2022-12-21,,"Market sentiment analysis on social media content requires knowledge of both financial markets and social media jargon, which makes it a challenging task for human raters. The resulting lack of high-quality labeled data stands in the way of conventional supervised learning methods. Instead, we approach this problem using semi-supervised learning with a large language model (LLM). Our pipeline generates weak financial sentiment labels for Reddit posts with an LLM and then uses that data to train a small model that can be served in production. We find that prompting the LLM to produce Chain-of-Thought summaries and forcing it through several reasoning paths helps generate more stable and accurate labels, while using a regression loss further improves distillation quality. With only a handful of prompts, the final model performs on par with existing supervised models. Though production applications of our model are limited by ethical considerations, the model’s competitive performance points to the great potential of using LLMs for tasks that otherwise require skill-intensive annotation.",52136f813243ac3de8e277906112a41590a376d4,Semantic Scholar,,somewhat relevant,"The paper discusses the use of large language models for skills extraction and matching with a specific framework, employing prompts for better performance, which is related to prompt engineering." understanding the effectiveness of very large language models on dialog evaluation,"['Jessica Huynh', 'Cathy Jiao', 'Prakhar Gupta', 'Shikib Mehri', 'Payal Bajaj', 'Vishrav Chaudhary', 'M. Eskénazi']",http://arxiv.org/pdf/2301.12004,2023-01-27,,"Language models have steadily increased in size over the past few years. They achieve a high level of performance on various natural language processing (NLP) tasks such as question answering and summarization. Large language models (LLMs) have been used for generation and can now output human-like text. Due to this, there are other downstream tasks in the realm of dialog that can now harness the LLMs' language understanding capabilities. Dialog evaluation is one task that this paper will explore. It concentrates on prompting with LLMs: BLOOM, OPT, GPT-3, Flan-T5, InstructDial and TNLGv2. The paper shows that the choice of datasets used for training a model contributes to how well it performs on a task as well as on how the prompt should be structured. Specifically, the more diverse and relevant the group of datasets that a model is trained on, the better dialog evaluation performs. 
This paper also investigates how the number of examples in the prompt and the type of example selection used affect the model's performance.",5882dd04d95c9c88cdec389059fcf44d56cbb789,Semantic Scholar,,somewhat relevant,"The paper uses pre-trained large language models in a novel way for multi-robot collaboration, involving the generation of sub-task plans through prompting LLM agents to improve their plans, which aligns with the principles of prompt engineering." planandsolve prompting improving zeroshot chainofthought reasoning by large language models,"['Lei Wang', 'Wanyu Xu', 'Yihuai Lan', 'Zhiqiang Hu', 'Yunshi Lan', 'R. Lee', 'Ee-Peng Lim']",http://arxiv.org/pdf/2305.04091,2023-05-06,,"Large language models (LLMs) have recently been shown to deliver impressive performance in various NLP tasks. To tackle multi-step reasoning tasks, Few-shot chain-of-thought (CoT) prompting includes a few manually crafted step-by-step reasoning demonstrations which enable LLMs to explicitly generate reasoning steps and improve their reasoning task accuracy. To eliminate the manual efforts, Zero-shot-CoT concatenates the target problem statement with “Let’s think step by step” as an input prompt to LLMs. Despite the success of Zero-shot-CoT, it still suffers from three pitfalls: calculation errors, missing-step errors, and semantic misunderstanding errors. To address the missing-step errors, we propose Plan-and-Solve (PS) Prompting. It consists of two components: first, devising a plan to divide the entire task into smaller subtasks, and then carrying out the subtasks according to the plan. To address the calculation errors and improve the quality of generated reasoning steps, we extend PS prompting with more detailed instructions and derive PS+ prompting. We evaluate our proposed prompting strategy on ten datasets across three reasoning problems. The experimental results over GPT-3 show that our proposed zero-shot prompting consistently outperforms Zero-shot-CoT across all datasets by a large margin, is comparable to or exceeds Zero-shot-Program-of-Thought Prompting, and has comparable performance with 8-shot CoT prompting on the math reasoning problem. The code can be found at https://github.com/AGI-Edgerunners/Plan-and-Solve-Prompting.",62176de125738e3b95850d1227bac81fd646b78e,Semantic Scholar,,somewhat relevant,"The paper explicitly mentions the use of prompts for LLMs to categorize software supply chain security failures, which relates to the application of prompt engineering." annollm making large language models to be better crowdsourced annotators,"['Xingwei He', 'Zheng-Wen Lin', 'Yeyun Gong', 'Alex Jin', 'Hang Zhang', 'Chen Lin', 'Jian Jiao', 'S. Yiu', 'Nan Duan', 'Weizhu Chen']",http://arxiv.org/pdf/2303.16854,2023-03-30,,"Many natural language processing (NLP) tasks rely on labeled data to train machine learning models to achieve high performance. However, data annotation can be a time-consuming and expensive process, especially when the task involves a large amount of data or requires specialized domains. Recently, GPT-3.5 series models have demonstrated remarkable few-shot and zero-shot ability across various NLP tasks. In this paper, we first claim that large language models (LLMs), such as GPT-3.5, can serve as an excellent crowdsourced annotator by providing them with sufficient guidance and demonstrated examples. To make LLMs to be better annotators, we propose a two-step approach, 'explain-then-annotate'. 
To be more precise, we begin by creating prompts for every demonstrated example, which we subsequently utilize to prompt a LLM to provide an explanation for why the specific ground truth answer/label was chosen for that particular example. Following this, we construct the few-shot chain-of-thought prompt with the self-generated explanation and employ it to annotate the unlabeled data. We conduct experiments on three tasks, including user input and keyword relevance assessment, BoolQ and WiC. The annotation results from GPT-3.5 surpasses those from crowdsourced annotation for user input and keyword relevance assessment. Additionally, for the other two tasks, GPT-3.5 achieves results that are comparable to those obtained through crowdsourced annotation.",70da4fb798a86cbe8cad96c27ced0415885bbd9d,Semantic Scholar,,highly relevant,"The paper details the use of inference-time dynamic prompting (IDP) as an adaptation tool for compressed LLMs, which directly aligns with the topic of prompt engineering." enhancing small medical learners with privacypreserving contextual prompting,"['Xinlu Zhang', 'SHIYANG LI', 'Xianjun Yang', 'Chenxin Tian', 'Yao Qin', 'Linda Petzold']",http://arxiv.org/pdf/2305.12723,2023-05-22,,"Large language models (LLMs) demonstrate remarkable medical expertise, but data privacy concerns impede their direct use in healthcare environments. Although offering improved data privacy protection, domain-specific small language models (SLMs) often underperform LLMs, emphasizing the need for methods that reduce this performance gap while alleviating privacy concerns. In this paper, we present a simple yet effective method that harnesses LLMs' medical proficiency to boost SLM performance in medical tasks under privacy-restricted scenarios. Specifically, we mitigate patient privacy issues by extracting keywords from medical data and prompting the LLM to generate a medical knowledge-intensive context by simulating clinicians' thought processes. This context serves as additional input for SLMs, augmenting their decision-making capabilities. Our method significantly enhances performance in both few-shot and full training settings across three medical knowledge-intensive tasks, achieving up to a 22.57% increase in absolute accuracy compared to SLM fine-tuning without context, and sets new state-of-the-art results in two medical tasks within privacy-restricted scenarios. Further out-of-domain testing and experiments in two general domain datasets showcase its generalizability and broad applicability.",74b94891f8f7ac8d73d9df817b6720e1cb792bcc,Semantic Scholar,,highly relevant,"The paper introduces a novel prompt engineering method called Prompt-FDC for generating safety-critical software code, which is directly related to the topic of prompt engineering." corrpus codebased structured prompting for neurosymbolic story understanding,"['Yi Dong', 'Lara J. Martin', 'Chris Callison-Burch']",https://aclanthology.org/2023.findings-acl.832.pdf,2022-12-21,,"Story generation and understanding -- as with all NLG/NLU tasks -- has seen a surge in neurosymbolic work. Researchers have recognized that, while large language models (LLMs) have tremendous utility, they can be augmented with symbolic means to be even better and to make up for any flaws that the neural networks might have. However, symbolic methods are extremely costly in terms of the amount of time and expertise needed to create them. 
In this work, we capitalize on state-of-the-art Code-LLMs, such as Codex, to bootstrap the use of symbolic methods for tracking the state of stories and aiding in story understanding. We show that our CoRRPUS system and abstracted prompting procedures can beat current state-of-the-art structured LLM techniques on pre-existing story understanding tasks (bAbI Task 2 and Re^3) with minimal hand engineering. We hope that this work can help highlight the importance of symbolic representations and specialized prompting for LLMs as these models require some guidance for performing reasoning tasks properly.",76f54657eb0893a0b203da57dcf0b4fffeebfc2c,Semantic Scholar,,highly relevant,"The paper directly addresses the concept of crafting effective prompts for code-generating models, which is central to the topic of prompt engineering." can large language models truly understand prompts a case study with negated prompts,"['Joel Jang', 'Seonghyeon Ye', 'Minjoon Seo']",http://arxiv.org/pdf/2209.12711,2022-09-26,,"Previous work has shown that there exists a scaling law between the size of Language Models (LMs) and their zero-shot performance on different downstream NLP tasks. In this work, we show that this phenomenon does not hold when evaluating large LMs on tasks with negated prompts, but instead shows an inverse scaling law. We evaluate 9 different tasks with negated prompts on (1) pretrained LMs (OPT&GPT-3) of varying sizes (125M - 175B), (2) LMs further pretrained to generalize to novel prompts (InstructGPT), (3) LMs provided with few-shot examples, and (4) LMs fine-tuned specifically on negated prompts; all LM types perform worse on negated prompts as they scale and show a huge performance gap between the human performance when comparing the average score on both original and negated prompts. By highlighting a critical limitation of existing LMs and methods, we urge the community to develop new approaches of developing LMs that actually follow the given instructions. We provide the code and the datasets to explore negated prompts at https://github.com/joeljang/negated-prompts-for-llms",7ce0c89a452e3c2917b63847495533865697c79c,Semantic Scholar,,somewhat relevant,"The paper focuses on using text prompts to generate music, indicating relevance to prompt engineering but does not specify the nature of these prompts as hard prefix prompts." deploying and evaluating llms to program service mobile robots,"['Zichao Hu', 'Francesca Lucchetti', 'Claire Schlesinger', 'Yash Saxena', 'Anders Freeman', 'Sadanand Modak', 'Arjun Guha', 'Joydeep Biswas']",https://arxiv.org/pdf/2311.11183,2023-11-18,,"Recent advancements in large language models (LLMs) have spurred interest in using them for generating robot programs from natural language, with promising initial results. We investigate the use of LLMs to generate programs for service mobile robots leveraging mobility, perception, and human interaction skills, and where accurate sequencing and ordering of actions is crucial for success. We contribute CodeBotler, an open-source robot-agnostic tool to program service mobile robots from natural language, and RoboEval, a benchmark for evaluating LLMs' capabilities of generating programs to complete service robot tasks. CodeBotler performs program generation via few-shot prompting of LLMs with an embedded domain-specific language (eDSL) in Python, and leverages skill abstractions to deploy generated programs on any general-purpose mobile robot. 
RoboEval evaluates the correctness of generated programs by checking execution traces starting with multiple initial states, and checking whether the traces satisfy temporal logic properties that encode correctness for each task. RoboEval also includes multiple prompts per task to test for the robustness of program generation. We evaluate several popular state-of-the-art LLMs with the RoboEval benchmark, and perform a thorough analysis of the modes of failures, resulting in a taxonomy that highlights common pitfalls of LLMs at generating robot programs.",7d884f1ff991eb9fff7bf31fa006196e58934b8a,Semantic Scholar,,highly relevant,"The abstract describes the use of prompting a large language model to generate OOD examples, which directly relates to the topic of prompt engineering." the student becomes the master matching gpt3 on scientific factual error correction,"['D. Ashok', 'Atharva Kulkarni', 'Hai Pham', 'B. Póczos']",https://arxiv.org/pdf/2305.14707,,,"Due to the prohibitively high cost of creating error correction datasets, most Factual Claim Correction methods rely on a powerful verification model to guide the correction process. This leads to a significant drop in performance in domains like Scientific Claim Correction, where good verification models do not always exist. In this work, we introduce a claim correction system that makes no domain assumptions and does not require a verifier but is able to outperform existing methods by an order of magnitude — achieving 94% correction accuracy on the SciFact dataset, and 62.5% on the SciFact-Open dataset, compared to the next best methods 0.5% and 1.50% respectively. Our method leverages the power of prompting with LLMs during training to create a richly annotated dataset that can be used for fully supervised training and regularization. We additionally use a claim-aware decoding procedure to improve the quality of corrected claims. Our method is competitive with the very LLM that was used to generate the annotated dataset — with GPT3.5 achieving 89.5% and 60% correction accuracy on SciFact and SciFact-Open, despite using 1250 times as many parameters as our model.",80ae1347b2dda02748f8f09da8a738121f5edfb5,Semantic Scholar,,somewhat relevant,"The paper mentions using prompts to large language models (LLMs) for labeling, which is relevant to prompt engineering." detecting phishing sites using chatgpt,"['Takashi Koide', 'Naoki Fukushi', 'Hiroki Nakano', 'Daiki Chiba']",http://arxiv.org/pdf/2306.05816,2023-06-09,,"The emergence of Large Language Models (LLMs), including ChatGPT, is having a significant impact on a wide range of fields. While LLMs have been extensively researched for tasks such as code generation and text synthesis, their application in detecting malicious web content, particularly phishing sites, has been largely unexplored. To combat the rising tide of cyber attacks due to the misuse of LLMs, it is important to automate detection by leveraging the advanced capabilities of LLMs. In this paper, we propose a novel system called ChatPhishDetector that utilizes LLMs to detect phishing sites. Our system involves leveraging a web crawler to gather information from websites, generating prompts for LLMs based on the crawled data, and then retrieving the detection results from the responses generated by the LLMs. 
The system enables us to detect multilingual phishing sites with high accuracy by identifying impersonated brands and social engineering techniques in the context of the entire website, without the need to train machine learning models. To evaluate the performance of our system, we conducted experiments on our own dataset and compared it with baseline systems and several LLMs. The experimental results using GPT-4V demonstrated outstanding performance, with a precision of 98.7% and a recall of 99.6%, outperforming the detection results of other LLMs and existing systems. These findings highlight the potential of LLMs for protecting users from online fraudulent activities and have important implications for enhancing cybersecurity measures.",838b2f66aa07dd97a473be59921e2cd7d39461e2,Semantic Scholar,,somewhat relevant,"The paper discusses the use of large language models (LLMs) given brief prompts to synthesize information, which aligns with the study of prompt engineering, particularly in the context of generating content based on prompts." more than you've asked for a comprehensive analysis of novel prompt injection threats to applicationintegrated large language models,"['Kai Greshake', 'Sahar Abdelnabi', 'Shailesh Mishra', 'C. Endres', 'Thorsten Holz', 'Mario Fritz']",http://arxiv.org/pdf/2302.12173,,,"We are currently witnessing dramatic advances in the capabilities of Large Language Models (LLMs). They are already being adopted in practice and integrated into many systems, including integrated development environments (IDEs) and search engines. The functionalities of current LLMs can be modulated via natural language prompts, while their exact internal functionality remains implicit and unassessable. This property, which makes them adaptable to even unseen tasks, might also make them susceptible to targeted adversarial prompting . Recently, several ways to misalign LLMs using Prompt Injection (PI) attacks have been introduced. In such attacks, an adversary can prompt the LLM to produce malicious content or override the original instructions and the employed filtering schemes. Recent work showed that these attacks are hard to mitigate, as state-of-the-art LLMs are instruction-following . So far, these attacks assumed that the adversary is directly prompting the LLM. In this work, we show that augmenting LLMs with retrieval and API calling capabilities (so-called Application-Integrated LLMs ) induces a whole new set of attack vectors. These LLMs might process poisoned content retrieved from the Web that contains malicious prompts pre-injected and selected by adversaries. We demonstrate that an attacker can indirectly perform such PI attacks. Based on this key insight, we systematically analyze the resulting threat landscape of Application-Integrated LLMs and discuss a variety of new attack vectors. To demonstrate the practical viabil-ity of our attacks, we implemented specific demonstrations",8fdd34153d1035d09dd4a6efa9cb0c91d23d0045,Semantic Scholar,,highly relevant,"The paper discusses prompt-tuning large language models with small datasets for agile text classification, directly aligning with hard prefix prompt engineering techniques." 
exploring the path from instructions to rewards with large language models in instancebased learning,"['Chase McDonald', 'Tyler Malloy', 'Thuy Ngoc Nguyen', 'Cleotilde Gonzalez']",https://ojs.aaai.org/index.php/AAAI-SS/article/download/27697/27470,2024-01-22,,"A prominent method to model human learning is through experiential learning, where decisions are influenced by the outcomes observed in previous actions. The decisions-from-experience approach often excludes other forms of learning in humans, such as learning from descriptive information. In humans, descriptive information can enhance learning by providing a denser signal, achieved through understanding the relationship between intermediate decisions and their future outcomes, instead of relying solely on observed outcomes. To account for experiential and descriptive information, we propose the use of large language models (LLMs) to convert descriptive information into dense signals that can be used by computational models that learn from experience. Building on past work in cognitive modeling, we utilize task instructions and prompt an LLM to define and quantify the critical actions an agent must take to succeed in the task. In an initial experiment, we test this approach using an Instance-Based Learning cognitive model of experiential decisions in a gridworld task. We demonstrate how the LLM can be prompted to provide a series of actions and relative values given the task instructions, then show how these values can be used in place of sparse outcome signals to improve the model’s learning of the task significantly.",92a55a027f77312492eaf379aadcf290d1094828,Semantic Scholar,,highly relevant,"The paper discusses generating discriminative prompts with large language models to improve zero-shot classification, which is a direct application of prompt engineering." concise and organized perception facilitates large language models for deductive reasoning,"['Shaotian Yan', 'Chen Shen', 'Junjie Liu', 'Jieping Ye']",https://arxiv.org/pdf/2310.03309,2023-10-05,,"Exploiting large language models (LLMs) to tackle deductive reasoning has garnered growing attention. It still remains highly challenging to achieve satisfactory results in complex deductive problems, characterized by plenty of premises (i.e., facts or rules) entailing intricate relationships among entities and requiring multi-hop reasoning. One intuitive solution is to decompose the original task into smaller sub-tasks, and then chain the multiple casual reasoning steps together in a forward (e.g., Selection-Inference) or backward (e.g., LAMBADA) direction. However, these techniques inevitably necessitate a large number of overall stages, leading to computationally expensive operations and a higher possibility of making misleading steps. In addition to stage-by-stage decomposition, we draw inspiration from another aspect of human problem-solving. Humans tend to distill the most relevant information and organize their thoughts systematically (e.g., creating mind maps), which assists them in answering questions or drawing conclusions precisely and quickly. In light of this, we propose a novel reasoning approach named Concise and Organized Perception (COP). COP carefully analyzes the given statements to efficiently identify the most pertinent information while eliminating redundancy. It then prompts the LLMs in a more organized form that adapts to the model's inference process. 
By perceiving concise and organized proofs, the deductive reasoning abilities of LLMs can be better elicited, and the risk of acquiring errors caused by excessive reasoning stages is mitigated. Furthermore, our approach can be combined with the aforementioned ones to further boost their performance. Extensive experimental results on three popular deductive benchmarks (i.e., ProofWriter, PrOntoQA and PrOntoQA-OOD) show that COP significantly outperforms previous state-of-the-art methods.",96e265e5de378f89a162981cd1c3eafa7b6f1d30,Semantic Scholar,,somewhat relevant,"The paper focuses on generative script learning utilizing large language models for knowledge prompting, which aligns with the concept of prompt engineering despite not explicitly mentioning hard prefix prompts." breaking language barriers with a leap learning strategies for polyglot llms,"['A. Nambi', 'Vaibhav Balloli', 'M. Ranjit', 'T. Ganu', 'Kabir Ahuja', 'Sunayana Sitaram', 'Kalika Bali']",http://arxiv.org/pdf/2305.17740,2023-05-28,,"Large language models (LLMs) are at the forefront of transforming numerous domains globally. However, their inclusivity and effectiveness remain limited for non-Latin scripts and low-resource languages. This paper tackles the imperative challenge of enhancing the multilingual performance of LLMs, specifically focusing on Generative models. Through systematic investigation and evaluation of diverse languages using popular question-answering (QA) datasets, we present novel techniques that unlock the true potential of LLMs in a polyglot landscape. Our approach encompasses three key strategies that yield remarkable improvements in multilingual proficiency. First, by meticulously optimizing prompts tailored for polyglot LLMs, we unlock their latent capabilities, resulting in substantial performance boosts across languages. Second, we introduce a new hybrid approach that synergizes GPT generation with multilingual embeddings and achieves significant multilingual performance improvement on critical tasks like QA and retrieval. Finally, to further propel the performance of polyglot LLMs, we introduce a novel learning algorithm that dynamically selects the optimal prompt strategy, LLM model, and embeddings per query. This dynamic adaptation maximizes the efficacy of LLMs across languages, outperforming best static and random strategies. Our results show substantial advancements in multilingual understanding and generation across a diverse range of languages.",9b71c89686334ba4f1247aa18990740a94e25cc3,Semantic Scholar,,highly relevant,"The paper focuses on developing and applying an optimized meta-prompt for large-scale language models specifically for abstract classification, indicating direct involvement with prompt engineering." boosting language models reasoning with chainofknowledge prompting,"['J. Wang', 'Qiushi Sun', 'Nuo Chen', 'Xiang Lorraine Li', 'Ming Gao']",https://arxiv.org/pdf/2306.06427,2023-06-10,,"Recently, Chain-of-Thought (CoT) prompting has delivered success on complex reasoning tasks, which aims at designing a simple prompt like ``Let's think step by step'' or multiple in-context exemplars with well-designed rationales to elicit Large Language Models (LLMs) to generate intermediate reasoning steps. However, the generated rationales often come with mistakes, making unfactual and unfaithful reasoning chains. 
To mitigate this brittleness, we propose a novel Chain-of-Knowledge (CoK) prompting, where we aim at eliciting LLMs to generate explicit pieces of knowledge evidence in the form of structure triple. This is inspired by our human behaviors, i.e., we can draw a mind map or knowledge map as the reasoning evidence in the brain before answering a complex question. Benefiting from CoK, we additionally introduce a F^2-Verification method to estimate the reliability of the reasoning chains in terms of factuality and faithfulness. For the unreliable response, the wrong evidence can be indicated to prompt the LLM to rethink. Extensive experiments demonstrate that our method can further improve the performance of commonsense, factual, symbolic, and arithmetic reasoning tasks.",9efa81ec4954b0859c47dad8f42edfaf8bced69b,Semantic Scholar,,highly relevant,"The paper discusses SOCRATIC QUESTIONING, an algorithm that improves the prompting process with large language models by recursively tackling sub-questions, which is a direct application of prompt engineering." susceptibility to influence of large language models,"['L. D. Griffin', 'Bennett Kleinberg', 'Maximilian Mozes', 'Kimberly T. Mai', 'Maria Vau', 'M. Caldwell', 'Augustine Marvor-Parker']",http://arxiv.org/pdf/2303.06074,2023-03-10,,"Two studies tested the hypothesis that a Large Language Model (LLM) can be used to model psychological change following exposure to influential input. The first study tested a generic mode of influence - the Illusory Truth Effect (ITE) - where earlier exposure to a statement (through, for example, rating its interest) boosts a later truthfulness test rating. Data was collected from 1000 human participants using an online experiment, and 1000 simulated participants using engineered prompts and LLM completion. 64 ratings per participant were collected, using all exposure-test combinations of the attributes: truth, interest, sentiment and importance. The results for human participants reconfirmed the ITE, and demonstrated an absence of effect for attributes other than truth, and when the same attribute is used for exposure and test. The same pattern of effects was found for LLM-simulated participants. The second study concerns a specific mode of influence - populist framing of news to increase its persuasion and political mobilization. Data from LLM-simulated participants was collected and compared to previously published data from a 15-country experiment on 7286 human participants. Several effects previously demonstrated from the human study were replicated by the simulated study, including effects that surprised the authors of the human study by contradicting their theoretical expectations (anti-immigrant framing of news decreases its persuasion and mobilization); but some significant relationships found in human data (modulation of the effectiveness of populist framing according to relative deprivation of the participant) were not present in the LLM data. Together the two studies support the view that LLMs have potential to act as models of the effect of influence.",ab90169f7213482efff246cc5f5f057351265f18,Semantic Scholar,,highly relevant,"The paper directly discusses the use of specific prompts to aid in the reasoning process of large language models, aligning with the topic of hard prefix prompt engineering." zerotop zeroshot taskoriented semantic parsing using large language models,"['Dheeraj Mekala', 'J. 
Wolfe', 'Subhro Roy']",http://arxiv.org/pdf/2212.10815,2022-12-21,,"We explore the use of large language models (LLMs) for zero-shot semantic parsing. Semantic parsing involves mapping natural language utterances to task-specific meaning representations. Language models are generally trained on the publicly available text and code and cannot be expected to directly generalize to domain-specific parsing tasks in a zero-shot setting. In this work, we propose ZEROTOP, a zero-shot task-oriented parsing method that decomposes a semantic parsing problem into a set of abstractive and extractive question-answering (QA) problems, enabling us to leverage the ability of LLMs to zero-shot answer reading comprehension questions. For each utterance, we prompt the LLM with questions corresponding to its top-level intent and a set of slots and use the LLM generations to construct the target meaning representation. We observe that current LLMs fail to detect unanswerable questions; and as a result, cannot handle questions corresponding to missing slots. To address this problem, we fine-tune a language model on public QA datasets using synthetic negative samples. Experimental results show that our QA-based decomposition paired with the fine-tuned LLM can correctly parse ~16% of utterances in the MTOP dataset without requiring any annotated data.",b8d06dd769f89d08bdd9997d7bd363c89ede845b,Semantic Scholar,,highly relevant,"The paper focuses on using in-context prompting and adversarial prompting with GPT-4 for hypothesis generation in astronomy, which is directly related to the implementation and effects of prompt engineering." can large language models write good propertybased tests,"['Vasudev Vikram', 'Caroline Lemieux', 'Rohan Padhye']",https://arxiv.org/pdf/2307.04346,2023-07-10,,"Property-based testing (PBT), while an established technique in the software testing research community, is still relatively underused in real-world software. Pain points in writing property-based tests include implementing diverse random input generators and thinking of meaningful properties to test. Developers, however, are more amenable to writing documentation; plenty of library API documentation is available and can be used as natural language specifications for property-based tests. As large language models (LLMs) have recently shown promise in a variety of coding tasks, we explore the potential of using LLMs to synthesize property-based tests. We call our approach PBT-GPT, and propose three different strategies of prompting the LLM for PBT. We characterize various failure modes of PBT-GPT and detail an evaluation methodology for automatically synthesized property-based tests. PBT-GPT achieves promising results in our preliminary studies on sample Python library APIs in $\texttt{numpy}$, $\texttt{networkx}$, and $\texttt{datetime}$.",c1996c3d4f289e613d4a44d04bb1c1c0fca80460,Semantic Scholar,,somewhat relevant,"The paper focuses on Chain-of-Thought-Prompting in Large Language Models, indicating the exploration of prompting techniques, yet it does not explicitly mention hard prefix prompts." large language models as batteriesincluded zeroshot esco skills matchers,"['Benjamin Clavié', ""Guillaume Souli'e""]",https://arxiv.org/pdf/2307.03539,2023-07-07,,"Understanding labour market dynamics requires accurately identifying the skills required for and possessed by the workforce. Automation techniques are increasingly being developed to support this effort. 
However, automatically extracting skills from job postings is challenging due to the vast number of existing skills. The ESCO (European Skills, Competences, Qualifications and Occupations) framework provides a useful reference, listing over 13,000 individual skills. However, skills extraction remains difficult and accurately matching job posts to the ESCO taxonomy is an open problem. In this work, we propose an end-to-end zero-shot system for skills extraction from job descriptions based on large language models (LLMs). We generate synthetic training data for the entirety of ESCO skills and train a classifier to extract skill mentions from job posts. We also employ a similarity retriever to generate skill candidates which are then re-ranked using a second LLM. Using synthetic data achieves an RP@10 score 10 points higher than previous distant supervision approaches. Adding GPT-4 re-ranking improves RP@10 by over 22 points over previous methods. We also show that Framing the task as mock programming when prompting the LLM can lead to better performance than natural language prompts, especially with weaker LLMs. We demonstrate the potential of integrating large language models at both ends of skills matching pipelines. Our approach requires no human annotations and achieve extremely promising results on skills extraction against ESCO.",c4f9f0cc8c138047a61bdb11b1a352e3d1aed035,Semantic Scholar,,highly relevant,"The paper uses prompting as a technique for classifying tabular data by serializing it to a natural-language string for input into a large language model, which aligns with the concept of prompt engineering." an empirical study on using large language models to analyze software supply chain security failures,"['Tanmay Singla', 'Dharun Anandayuvaraj', 'Kelechi G. Kalu', 'Taylor R. Schorlemmer', 'James C. Davis']",https://dl.acm.org/doi/pdf/10.1145/3605770.3625214,2023-08-09,,"As we increasingly depend on software systems, the consequences of breaches in the software supply chain become more severe. High-profile cyber attacks like SolarWinds and ShadowHammer have resulted in significant financial and data losses, underlining the need for stronger cybersecurity. One way to prevent future breaches is by studying past failures. However, traditional methods of analyzing past failures require manually reading and summarizing reports about them. Automated support could reduce costs and allow analysis of more failures. Natural Language Processing (NLP) techniques such as Large Language Models (LLMs) could be leveraged to assist the analysis of failures. In this study, we assessed the ability of Large Language Models (LLMs) to analyze historical software supply chain breaches. We used LLMs to replicate the manual analysis of 69 software supply chain security failures performed by members of the Cloud Native Computing Foundation (CNCF). We developed prompts for LLMs to categorize these by four dimensions: type of compromise, intent, nature, and impact. GPT 3.5's categorizations had an average accuracy of 68% and Bard's had an accuracy of 58% over these dimensions. We report that LLMs effectively characterize software supply chain failures when the source articles are detailed enough for consensus among manual analysts, but cannot yet replace human analysts. 
Future work can improve LLM performance in this context, and study a broader range of articles and failures.",c91f6eb320c70e2f64b6fb935494978a8699f06a,Semantic Scholar,,highly relevant,"The paper discusses employing 'Self-Critique prompting' in LLMs to improve their response to user inputs, directly relating to the application and manipulation of prompts." actiongpt leveraging largescale language models for improved and generalized action generation,"['Sai Shashank Kalakonda', 'Shubh Maheshwari', 'Ravi Kiran Sarvadevabhatla']",https://arxiv.org/pdf/2211.15603,2022-11-28,,"We introduce Action-GPT, a plug-and-play framework for incorporating Large Language Models (LLMs) into text-based action generation models. Action phrases in current motion capture datasets contain minimal and to-the-point information. By carefully crafting prompts for LLMs, we generate richer and fine-grained descriptions of the action. We show that utilizing these detailed descriptions instead of the original action phrases leads to better alignment of text and motion spaces. We introduce a generic approach compatible with stochastic (e.g. VAE-based) and deterministic (e.g. MotionCLIP) text-to-motion models. In addition, the approach enables multiple text descriptions to be utilized. Our experiments show (i) noticeable qualitative and quantitative improvement in the quality of synthesized motions, (ii) benefits of utilizing multiple LLM-generated descriptions, (iii) suitability of the prompt function, and (iv) zero-shot generation capabilities of the proposed approach. Code and pretrained models are available at https://actiongpt.github.io.",cb2954127a7fce8ab84486765392ce95dcdd8175,Semantic Scholar,,highly relevant,"The paper discusses chain-of-thought (CoT) prompting, a technique in prompt engineering, focusing on the annotation and validation of explanations generated by language models, which is directly related to prompt engineering." rlaif scaling reinforcement learning from human feedback with ai feedback,"['Harrison Lee', 'Samrat Phatale', 'Hassan Mansoor', 'Kellie Lu', 'Thomas Mesnard', 'Colton Bishop', 'Victor Carbune', 'Abhinav Rastogi']",https://arxiv.org/pdf/2309.00267,2023-09-01,,"Reinforcement learning from human feedback (RLHF) has proven effective in aligning large language models (LLMs) with human preferences. However, gathering high-quality human preference labels can be a time-consuming and expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al., offers a promising alternative that leverages a powerful off-the-shelf LLM to generate preferences in lieu of human annotators. Across the tasks of summarization, helpful dialogue generation, and harmless dialogue generation, RLAIF achieves comparable or superior performance to RLHF, as rated by human evaluators. Furthermore, RLAIF demonstrates the ability to outperform a supervised fine-tuned baseline even when the LLM preference labeler is the same size as the policy. In another experiment, directly prompting the LLM for reward scores achieves superior performance to the canonical RLAIF setup, where LLM preference labels are first distilled into a reward model. Finally, we conduct extensive studies on techniques for generating aligned AI preferences. 
Our results suggest that RLAIF can achieve human-level performance, offering a potential solution to the scalability limitations of RLHF.",cb587eaea753ee38013afb7e5b6bc8fba1248d04,Semantic Scholar,,highly relevant,"The paper directly discusses the development and application of a specific prompt to adapt recipes, which falls under the category of prompt engineering." cuecot chainofthought prompting for responding to indepth dialogue questions with llms,"['Hongru Wang', 'Rui Wang', 'Fei Mi', 'Yang Deng', 'Zezhong Wang', 'Bin Liang', 'Ruifeng Xu', 'Kam-Fai Wong']",https://aclanthology.org/2023.findings-emnlp.806.pdf,2023-05-19,,"Large Language Models (LLMs), such as \texttt{ChatGPT}, greatly empower dialogue systems with strong language understanding and generation capabilities. However, most of the previous works prompt the LLMs to directly generate a response based on the dialogue context, overlooking the underlying linguistic cues about the user status exhibited in the context. Such in-depth dialogue scenarios are challenging for existing LLMs to figure out the user's hidden needs and respond satisfactorily through a single-step inference. To this end, we propose a novel linguistic cue-based chain-of-thoughts (\textit{Cue}-CoT), which enhances the LLMs inference with an intermediate reasoning step to find cues exhibited in the dialogue, aiming to provide a more personalized and engaging response. To evaluate the approach, we build a benchmark with in-depth dialogue questions, consisting of 6 datasets in both Chinese and English, targeting 3 major linguistic cues during the conversation: \textit{personality}, \textit{emotion}, and \textit{psychology}. We conduct extensive experiments on the proposed benchmark with 5 LLMs under both zero-shot and one-shot settings. Empirical results demonstrate our proposed \textit{Cue}-CoT method outperforms standard prompting methods in terms of both \textit{helpfulness} and \textit{acceptability} on all datasets.",d0c69c309fbf1233b6351cd57484557c16f28427,Semantic Scholar,,highly relevant,"The paper discusses employing prompting with explanations in the context of few-shot learning, which implies the use of hard prefix prompts in instructing GPT-3, relevant to prompt engineering." soft prompt tuning for augmenting dense retrieval with large language models,"['Zhiyuan Peng', 'Xuyang Wu', 'Yihan Fang']",https://arxiv.org/pdf/2307.08303,2023-07-17,,"Dense retrieval (DR) converts queries and documents into dense embeddings and measures the similarity between queries and documents in vector space. One of the challenges in DR is the lack of domain-specific training data. While DR models can learn from large-scale public datasets like MS MARCO through transfer learning, evidence shows that not all DR models and domains can benefit from transfer learning equally. Recently, some researchers have resorted to large language models (LLMs) to improve the zero-shot and few-shot DR models. However, the hard prompts or human-written prompts utilized in these works cannot guarantee the good quality of generated weak queries. To tackle this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task, we leverage soft prompt-tuning to optimize a task-specific soft prompt on limited ground truth data and then prompt the LLMs to tag unlabeled documents with weak queries, yielding enough weak document-query pairs to train task-specific dense retrievers. 
We design a filter to select high-quality example document-query pairs in the prompt to further improve the quality of weak tagged queries. To the best of our knowledge, there is no prior work utilizing soft prompt tuning to augment DR models. The experiments demonstrate that SPTAR outperforms the unsupervised baselines BM25 and the recently proposed LLMs-based augmentation method for DR.",d44031f253668c61ac6d68b95bbe9cac57730d51,Semantic Scholar,,somewhat relevant,"The paper discusses enhancing retrieval mechanisms for selecting prompts for LLMs, which is a component of prompt engineering." on the planning abilities of large language models a critical investigation,"['Karthik Valmeekam', 'Matthew Marquez', 'S. Sreedharan', 'Subbarao Kambhampati']",http://arxiv.org/pdf/2305.15771,2023-05-25,,"Intrigued by the claims of emergent reasoning capabilities in LLMs trained on general web corpora, in this paper, we set out to investigate their planning capabilities. We aim to evaluate (1) the effectiveness of LLMs in generating plans autonomously in commonsense planning tasks and (2) the potential of LLMs as a source of heuristic guidance for other agents (AI planners) in their planning tasks. We conduct a systematic study by generating a suite of instances on domains similar to the ones employed in the International Planning Competition and evaluate LLMs in two distinct modes: autonomous and heuristic. Our findings reveal that LLMs' ability to generate executable plans autonomously is rather limited, with the best model (GPT-4) having an average success rate of ~12% across the domains. However, the results in the heuristic mode show more promise. In the heuristic mode, we demonstrate that LLM-generated plans can improve the search process for underlying sound planners and additionally show that external verifiers can help provide feedback on the generated plans and back-prompt the LLM for better plan generation.",dedfe929d182cc3537a9ed765d589b4735ce062a,Semantic Scholar,,highly relevant,"The paper discusses guiding LLMs like ChatGPT in generating distractors for MCQs by prompting with automatically retrieved question items and in-context examples, aligning with the concept of hard prefix prompting." an empirical study of the code generation of safetycritical software using llms,"['Mingxing Liu', 'Junfeng Wang', 'Tao Lin', 'Quan Ma', 'Zhiyang Fang', 'Yanqun Wu']",https://www.mdpi.com/2076-3417/14/3/1046/pdf?version=1706237314,2024-01-26,,"In the digital era of increasing software complexity, improving the development efficiency of safety-critical software is a challenging task faced by academia and industry in domains such as nuclear energy, aviation, the automotive industry, and rail transportation. Recently, people have been excited about using pre-trained large language models (LLMs) such as ChatGPT and GPT-4 to generate code. Professionals in the safety-critical software field are intrigued by the code generation capabilities of LLMs. However, there is currently a lack of systematic case studies in this area. Aiming at the need for automated code generation in safety-critical domains such as nuclear energy and the automotive industry, this paper conducts a case study on generating safety-critical software code using GPT-4 as the tool. Practical engineering cases from the industrial domain are employed. We explore different approaches, including code generation based on overall requirements, specific requirements, and augmented prompts. 
We propose a novel prompt engineering method called Prompt-FDC that integrates basic functional requirements, domain feature generalization, and domain constraints. This method improves code completeness from achieving 30% functions to 100% functions, increases the code comment rate to 26.3%, and yields better results in terms of code compliance, readability, and maintainability. The code generation approach based on LLMs also introduces a new software development process and V-model lifecycle for safety-critical software. Through systematic case studies, we demonstrate that, with appropriate prompt methods, LLMs can auto-generate safety-critical software code that meets practical engineering application requirements. It is foreseeable that LLMs can be applied to various engineering domains to improve software safety and development efficiency.",e611a540abfbcaa2920940ab3729840112a513c7,Semantic Scholar,,highly relevant,"The paper directly addresses the use and impact of different types of prompts (diegetic and non-diegetic) in writing with Large Language Models, aligning with the study of prompt engineering." towards languageguided interactive 3d generation llms as layout interpreter with generative feedback,"['Yiqi Lin', 'Hao Wu', 'Ruichen Wang', 'H. Lu', 'Xiaodong Lin', 'Hui Xiong', 'Lin Wang']",http://arxiv.org/pdf/2305.15808,2023-05-25,,"Generating and editing a 3D scene guided by natural language poses a challenge, primarily due to the complexity of specifying the positional relations and volumetric changes within the 3D space. Recent advancements in Large Language Models (LLMs) have demonstrated impressive reasoning, conversational, and zero-shot generation abilities across various domains. Surprisingly, these models also show great potential in realizing and interpreting the 3D space. In light of this, we propose a novel language-guided interactive 3D generation system, dubbed LI3D, that integrates LLMs as a 3D layout interpreter into the off-the-shelf layout-to-3D generative models, allowing users to flexibly and interactively generate visual content. Specifically, we design a versatile layout structure base on the bounding boxes and semantics to prompt the LLMs to model the spatial generation and reasoning from language. Our system also incorporates LLaVA, a large language and vision assistant, to provide generative feedback from the visual aspect for improving the visual quality of generated content. We validate the effectiveness of LI3D, primarily in 3D generation and editing through multi-round interactions, which can be flexibly extended to 2D generation and editing. Various experiments demonstrate the potential benefits of incorporating LLMs in generative AI for applications, e.g., metaverse. Moreover, we benchmark the layout reasoning performance of LLMs with neural visual artist tasks, revealing their emergent ability in the spatial layout domain.",ef8c21e1f574495f0c80b8c1037dbdb886f0808d,Semantic Scholar,,highly relevant,"The study focuses on augmenting prompts using a genetic algorithm to optimize performance, directly involving prompt engineering techniques." contextfaithful prompting for large language models,"['Wenxuan Zhou', 'Sheng Zhang', 'Hoifung Poon', 'Muhao Chen']",http://arxiv.org/pdf/2303.11315,2023-03-20,,"Large language models (LLMs) encode parametric knowledge about world facts and have shown remarkable performance in knowledge-driven NLP tasks. 
However, their reliance on parametric knowledge may cause them to overlook contextual cues, leading to incorrect predictions in context-sensitive NLP tasks (e.g., knowledge acquisition tasks). In this paper, we seek to assess and enhance LLMs' contextual faithfulness in two aspects: knowledge conflict and prediction with abstention. We demonstrate that LLMs' faithfulness can be significantly improved using carefully designed prompting strategies. In particular, we identify opinion-based prompts and counterfactual demonstrations as the most effective methods. Opinion-based prompts reframe the context as a narrator's statement and inquire about the narrator's opinions, while counterfactual demonstrations use instances containing false facts to improve faithfulness in knowledge conflict situations. Neither technique requires additional training. We conduct experiments on three datasets of two standard NLP tasks, machine reading comprehension and relation extraction, and the results demonstrate significant improvement in faithfulness to contexts. Code and data are released at https://github.com/wzhouad/context-faithful-llm.",12c826f4195da172b212a529f8fcf10cc79e35da,Semantic Scholar,,highly relevant,"The paper describes the use of iterative prompt optimization with GPT-3.5-Turbo for extracting metabolic networks from literature, indicating the use of hard prefix prompts in the process." conal anticipating outliers with large language models,"['Albert Xu', 'Xiang Ren', 'Robin Jia']",http://arxiv.org/pdf/2211.15718,,,"In many task settings, text classification models are likely to encounter examples from novel classes on which they cannot predict correctly. Selective prediction, in which models abstain on low-confidence examples, provides a possible solution, but existing models are often overly confident on OOD examples. To remedy this overconfidence, we introduce Contrastive Novelty-Augmented Learning (CoNAL), a two-step method that generates OOD examples representative of novel classes, then trains to decrease confidence on them. First, we generate OOD examples by prompting a large language model twice: we prompt it to enumerate relevant novel labels, then generate examples from each novel class matching the task format. Second, we train our classifier with a novel contrastive objective that encourages lower confidence on generated OOD examples than training examples. When trained with CoNAL, classifiers improve in their ability to detect and abstain on OOD examples over prior methods by an average of 2.3% AUAC and 5.5% AUROC across 4 NLP datasets, with no cost to in-distribution accuracy.1",19da40fd01c711fb2b3b0b19b3956b86b75f575d,Semantic Scholar,,highly relevant,"The paper directly addresses prompt engineering, focusing on optimizing prompts for specific Large Language Models to improve performance on downstream tasks." xparade crosslingual textual entailment and information divergence across paragraphs,"['Juan Diego Rodriguez', 'Katrin Erk', 'Greg Durrett']",https://arxiv.org/pdf/2309.08873,2023-09-16,,"Understanding when two pieces of text convey the same information is a goal touching many subproblems in NLP, including textual entailment and fact-checking. This problem becomes more complex when those two pieces of text are in different languages. Here, we introduce X-PARADE (Cross-lingual Paragraph-level Analysis of Divergences and Entailments), the first cross-lingual dataset of paragraph-level information divergences. 
Annotators label a paragraph in a target language at the span level and evaluate it with respect to a corresponding paragraph in a source language, indicating whether a given piece of information is the same, new, or new but can be inferred. This last notion establishes a link with cross-language NLI. Aligned paragraphs are sourced from Wikipedia pages in different languages, reflecting real information divergences observed in the wild. Armed with our dataset, we investigate a diverse set of approaches for this problem, including classic token alignment from machine translation, textual entailment methods that localize their decisions, and prompting of large language models. Our results show that these methods vary in their capability to handle inferable information, but they all fall short of human performance.",300b01dc726fe8acbededd805501811d427920bd,Semantic Scholar,,highly relevant,"The paper is focused on textual prompt optimization for black-box models, which directly relates to the topic of prompt engineering." towards agile text classifiers for everyone,"['Maximilian Mozes', 'Jessica Hoffmann', 'K. Tomanek', 'Muhamed Kouate', 'Nithum Thain', 'Ann Yuan', 'Tolga Bolukbasi', 'Lucas Dixon']",http://arxiv.org/pdf/2302.06541,2023-02-13,,"Text-based safety classifiers are widely used for content moderation and increasingly to tune generative language model behavior - a topic of growing concern for the safety of digital assistants and chatbots. However, different policies require different classifiers, and safety policies themselves improve from iteration and adaptation. This paper introduces and evaluates methods for agile text classification, whereby classifiers are trained using small, targeted datasets that can be quickly developed for a particular policy. Experimenting with 7 datasets from three safety-related domains, comprising 15 annotation schemes, led to our key finding: prompt-tuning large language models, like PaLM 62B, with a labeled dataset of as few as 80 examples can achieve state-of-the-art performance. We argue that this enables a paradigm shift for text classification, especially for models supporting safer online discourse. Instead of collecting millions of examples to attempt to create universal safety classifiers over months or years, classifiers could be tuned using small datasets, created by individuals or small organizations, tailored for specific use cases, and iterated on and adapted in the time-span of a day.",335303a513e376b120212337c154cb91fa3689db,Semantic Scholar,,highly relevant,"The paper explicitly mentions the use of prompt engineering for generating health awareness messages, indicating direct relevance to the topic." hierarchical prompting assists large language model on web navigation,"['Abishek Sridhar', 'Robert Lo', 'Frank F. Xu', 'Hao Zhu', 'Shuyan Zhou']",http://arxiv.org/pdf/2305.14257,2023-05-23,,"Large language models (LLMs) struggle on processing complicated observations in interactive decision making tasks. To alleviate this issue, we propose a simple hierarchical prompting approach. Diverging from previous prompting approaches that always put the full observation (e.g. a web page) to the prompt, we propose to first construct an action-aware observation which is more condensed and relevant with a dedicated SUMMARIZER prompt. The ACTOR prompt then predicts the next action based on the summarized observation. 
While our method has broad applicability, we particularly demonstrate its efficacy in the complex domain of web navigation where a full observation often contains redundant and irrelevant information. Our approach outperforms the previous state-of-the-art prompting mechanics by 6.2% on task success rate, demonstrating its potential on interactive decision making tasks with long observation traces.",3d8e6358968c8bd5e97f21fead73bf4ba0c2a8d7,Semantic Scholar,,highly relevant,"The paper mentions the use of a 'bespoke, zero-shot prompt' with GPT-4 for extracting information, indicating it discusses prompt engineering techniques." towards realistic zeroshot classification via self structural semantic alignment,"['Shengxiang Zhang', 'Muzammal Naseer', 'Guangyi Chen', 'Zhiqiang Shen', 'Salman A. Khan', 'Kun Zhang', 'F. Khan']",https://arxiv.org/pdf/2308.12960,2023-08-24,,"Large-scale pre-trained Vision Language Models (VLMs) have proven effective for zero-shot classification. Despite the success, most traditional VLMs-based methods are restricted by the assumption of partial source supervision or ideal vocabularies, which rarely satisfy the open-world scenario. In this paper, we aim at a more challenging setting, Realistic Zero-Shot Classification, which assumes no annotation but instead a broad vocabulary. To address this challenge, we propose the Self Structural Semantic Alignment (S^3A) framework, which extracts the structural semantic information from unlabeled data while simultaneously self-learning. Our S^3A framework adopts a unique Cluster-Vote-Prompt-Realign (CVPR) algorithm, which iteratively groups unlabeled data to derive structural semantics for pseudo-supervision. Our CVPR process includes iterative clustering on images, voting within each cluster to identify initial class candidates from the vocabulary, generating discriminative prompts with large language models to discern confusing candidates, and realigning images and the vocabulary as structural semantic alignment. Finally, we propose to self-learn the CLIP image encoder with both individual and structural semantic alignment through a teacher-student learning strategy. Our comprehensive experiments across various generic and fine-grained benchmarks demonstrate that the S^3A method offers substantial improvements over existing VLMs-based approaches, achieving a more than 15% accuracy improvement over CLIP on average. Our codes, models, and prompts are publicly released at https://github.com/sheng-eatamath/S3A.",437cfee2a7f7beadf09ad712f71b3265740e44a0,Semantic Scholar,,highly relevant,"The paper explicitly mentions the use of both zero-shot and few-shot chaining of thought as prompting approaches, indicating active prompt engineering to analyze the performance of GPT-4 in evaluating tutor feedback." interacting with large language models a case study on aiaided brainstorming for guesstimation problems,"['Vildan Salikutluk', 'Dorothea Koert', 'F. Jäkel']",https://ebooks.iospress.nl/pdf/doi/10.3233/FAIA230081,,,". Designing cooperative AI-systems that do not automate tasks but rather aid human cognition is challenging and requires human-centered design approaches. Here, we introduce AI-aided brainstorming for solving guesstimation problems, i.e. estimating quantities from incomplete information, as a testbed for human-AI interaction with large language models (LLMs). In a think-aloud study, we found that humans decompose guesstimation questions into sub-questions and often replace them with semantically related ones. 
If they fail to brainstorm related questions, they often get stuck and do not find a solution. Therefore, to support this brainstorming process, we prompted a large language model (GPT-3) with successful replacements from our think-aloud data. In follow-up studies, we tested whether the availability of this tool improves participants’ answers. While the tool successfully produced human-like suggestions, participants were reluctant to use it. From our findings, we conclude that for human-AI interaction with LLMs to be successful AI-systems must complement rather than mimic a user’s associations.",4f9e7eb2f009e30f15eca18f4e540915b637b603,Semantic Scholar,,highly relevant,"The paper describes the use of prompting methods within a multilingual learning framework, directly relating to prompt engineering." multiscript multimodal script learning for supporting open domain everyday tasks,"['Jingyuan Qi', 'Minqian Liu', 'Ying Shen', 'Zhiyang Xu', 'Lifu Huang']",https://arxiv.org/pdf/2310.04965,2023-10-08,,"Automatically generating scripts (i.e. sequences of key steps described in text) from video demonstrations and reasoning about the subsequent steps are crucial to the modern AI virtual assistants to guide humans to complete everyday tasks, especially unfamiliar ones. However, current methods for generative script learning rely heavily on well-structured preceding steps described in text and/or images or are limited to a certain domain, resulting in a disparity with real-world user scenarios. To address these limitations, we present a new benchmark challenge -- MultiScript, with two new tasks on task-oriented multimodal script learning: (1) multimodal script generation, and (2) subsequent step prediction. For both tasks, the input consists of a target task name and a video illustrating what has been done to complete the target task, and the expected output is (1) a sequence of structured step descriptions in text based on the demonstration video, and (2) a single text description for the subsequent step, respectively. Built from WikiHow, MultiScript covers multimodal scripts in videos and text descriptions for over 6,655 human everyday tasks across 19 diverse domains. To establish baseline performance on MultiScript, we propose two knowledge-guided multimodal generative frameworks that incorporate the task-related knowledge prompted from large language models such as Vicuna. Experimental results show that our proposed approaches significantly improve over the competitive baselines.",5ece96203cd1dc9ff3f99867faa451939d86d545,Semantic Scholar,,highly relevant,"The paper mentions doing prompt engineering for zero-shot/few-shot learning with ChatGPT and GPT-4 models, directly relating to the topic of prompt engineering." development of metaprompts for large language models to screen titles and abstracts for diagnostic test accuracy reviews,"['Y. Kataoka', 'R. So', 'M. Banno', 'J. Kumasawa', 'H. Someko', 'S. Taito', 'T. Terasawa', 'Y. Tsujimoto', 'Y. Tsutsumi', 'Y. Wada', 'T. A. Furukawa']",https://www.medrxiv.org/content/medrxiv/early/2023/10/31/2023.10.31.23297818.full.pdf,2023-11-01,,"Systematic reviews (SRs) are a critical component of evidence-based medicine, but the process of screening titles and abstracts is time-consuming. This study aimed to develop and externally validate a method using large language models to classify abstracts for diagnostic test accuracy (DTA) systematic reviews, thereby reducing the human workload. 
We used a previously collected dataset for developing DTA abstract classifiers and applied prompt engineering. We developed an optimized meta-prompt for Generative Pre-trained Transformer (GPT)-3.5-turbo and GPT-4 to classify abstracts. In the external validation dataset 1, the prompt with GPT-3.5 turbo showed a sensitivity of 0.988, and a specificity of 0.298. GPT-4 showed a sensitivity of 0.982, and a specificity of 0.677. In the external validation dataset 2, GPT-3.5 turbo showed a sensitivity of 0.919, and a specificity of 0.434. GPT-4 showed a sensitivity of 0.806, and a specificity of 0.740. If we included eligible studies from among the references of the identified studies, GPT-3.5 turbo had no critical misses, while GPT-4 had some misses. Our study indicates that GPT-3.5 turbo can be effectively used to classify abstracts for DTA systematic reviews. Further studies using other dataset are warranted to confirm our results. Additionally, we encourage the use of our framework and publicly available dataset for further exploration of more effective classifiers using other LLMs and prompts (https://github.com/youkiti/ARE/).",6384921f1bd1059c6b4c37ac3c4e4f19e45d40c1,Semantic Scholar,,highly relevant,"The paper directly investigates the impact of prompt programming (engineering) on the fine-tuning performance of language models, which falls squarely within the realm of hard prefix prompting." langrasp using large language models for semantic object grasping,"['Reihaneh Mirjalili', 'Michael Krawez', 'Simone Silenzi', 'Yannik Blei', 'Wolfram Burgard']",https://arxiv.org/pdf/2310.05239,2023-10-08,,"In this paper, we propose LAN-grasp, a novel approach towards more appropriate semantic grasping. We use foundation models to provide the robot with a deeper understanding of the objects, the right place to grasp an object, or even the parts to avoid. This allows our robot to grasp and utilize objects in a more meaningful and safe manner. We leverage the combination of a Large Language Model, a Vision Language Model, and a traditional grasp planner to generate grasps demonstrating a deeper semantic understanding of the objects. We first prompt the Large Language Model about which object part is appropriate for grasping. Next, the Vision Language Model identifies the corresponding part in the object image. Finally, we generate grasp proposals in the region proposed by the Vision Language Model. Building on foundation models provides us with a zero-shot grasp method that can handle a wide range of objects without the need for further training or fine-tuning. We evaluated our method in real-world experiments on a custom object data set. We present the results of a survey that asks the participants to choose an object part appropriate for grasping. The results show that the grasps generated by our method are consistently ranked higher by the participants than those generated by a conventional grasping planner and a recent semantic grasping approach.",894b2fe365642d350e0d688c33ba65124b1c2979,Semantic Scholar,,highly relevant,"The abstract explicitly mentions the need for carefully designed prompts in utilizing ChatGPT's capabilities, indicating the relevance of prompt engineering." prompt tuning large language models on personalized aspect extraction for recommendations,"['Pan Li', 'Yuyan Wang', 'Ed H. 
Chi', 'Minmin Chen']",http://arxiv.org/pdf/2306.01475,2023-06-02,,"Existing aspect extraction methods mostly rely on explicit or ground truth aspect information, or using data mining or machine learning approaches to extract aspects from implicit user feedback such as user reviews. It however remains under-explored how the extracted aspects can help generate more meaningful recommendations to the users. Meanwhile, existing research on aspect-based recommendations often relies on separate aspect extraction models or assumes the aspects are given, without accounting for the fact the optimal set of aspects could be dependent on the recommendation task at hand. In this work, we propose to combine aspect extraction together with aspect-based recommendations in an end-to-end manner, achieving the two goals together in a single framework. For the aspect extraction component, we leverage the recent advances in large language models and design a new prompt learning mechanism to generate aspects for the end recommendation task. For the aspect-based recommendation component, the extracted aspects are concatenated with the usual user and item features used by the recommendation model. The recommendation task mediates the learning of the user embeddings and item embeddings, which are used as soft prompts to generate aspects. Therefore, the extracted aspects are personalized and contextualized by the recommendation task. We showcase the effectiveness of our proposed method through extensive experiments on three industrial datasets, where our proposed framework significantly outperforms state-of-the-art baselines in both the personalized aspect extraction and aspect-based recommendation tasks. In particular, we demonstrate that it is necessary and beneficial to combine the learning of aspect extraction and aspect-based recommendation together. We also conduct extensive ablation studies to understand the contribution of each design component in our framework.",8a4320fd903677a3ea2bf606a6537b59885b1108,Semantic Scholar,,highly relevant,"The paper directly addresses refining prompts, specifically by adjusting logical components rather than the text, to guide LLMs in drug discovery, which is a form of prompt engineering." automatic chain of thought prompting in large language models,"['Zhuosheng Zhang', 'Aston Zhang', 'Mu Li', 'Alexander J. Smola']",http://arxiv.org/pdf/2210.03493,2022-10-07,,"Large language models (LLMs) can perform complex reasoning by generating intermediate reasoning steps. Providing these steps for prompting demonstrations is called chain-of-thought (CoT) prompting. CoT prompting has two major paradigms. One leverages a simple prompt like""Let's think step by step""to facilitate step-by-step thinking before answering a question. The other uses a few manual demonstrations one by one, each composed of a question and a reasoning chain that leads to an answer. The superior performance of the second paradigm hinges on the hand-crafting of task-specific demonstrations one by one. We show that such manual efforts may be eliminated by leveraging LLMs with the""Let's think step by step""prompt to generate reasoning chains for demonstrations one by one, i.e., let's think not just step by step, but also one by one. However, these generated chains often come with mistakes. To mitigate the effect of such mistakes, we find that diversity matters for automatically constructing demonstrations. We propose an automatic CoT prompting method: Auto-CoT. 
It samples questions with diversity and generates reasoning chains to construct demonstrations. On ten public benchmark reasoning tasks with GPT-3, Auto-CoT consistently matches or exceeds the performance of the CoT paradigm that requires manual designs of demonstrations. Code is available at https://github.com/amazon-research/auto-cot",90350aa626bed47b02d0c162462e5b0ca82be6b2,Semantic Scholar,,highly relevant,"The paper mentions 'prompts engineering' in the context of enhancing interactive learning, making it directly relevant to the topic of prompt engineering." harnessing the power of adversarial prompting and large language models for robust hypothesis generation in astronomy,"['I. Ciucă', 'Y. Ting', 'S. Kruk', 'K. Iyer']",http://arxiv.org/pdf/2306.11648,2023-06-20,,"This study investigates the application of Large Language Models (LLMs), specifically GPT-4, within Astronomy. We employ in-context prompting, supplying the model with up to 1000 papers from the NASA Astrophysics Data System, to explore the extent to which performance can be improved by immersing the model in domain-specific literature. Our findings point towards a substantial boost in hypothesis generation when using in-context prompting, a benefit that is further accentuated by adversarial prompting. We illustrate how adversarial prompting empowers GPT-4 to extract essential details from a vast knowledge base to produce meaningful hypotheses, signaling an innovative step towards employing LLMs for scientific research in Astronomy.",91099bbb96133c70db091041900ecff502a5e3a8,Semantic Scholar,,highly relevant,"The paper discusses utilizing prompt engineering techniques in the context of integrating AI into the writing classroom, directly indicating relevance to prompt engineering." dynamic strategy chain dynamic zeroshot cot for long mental health support generation,"['Qi Chen', 'Dexi Liu']",https://arxiv.org/pdf/2308.10444,2023-08-21,,"Long counseling Text Generation for Mental health support (LTGM), an innovative and challenging task, aims to provide help-seekers with mental health support through a comprehensive and more acceptable response. The combination of chain-of-thought (CoT) prompting and Large Language Models (LLMs) is employed and get the SOTA performance on various NLP tasks, especially on text generation tasks. Zero-shot CoT prompting is one of the most common methods in CoT prompting. However, in the LTGM task, Zero-shot CoT prompting can not simulate a counselor or provide personalized strategies without effective mental health counseling strategy prompts. To tackle this challenge, we propose a zero-shot Dynamic Strategy Chain (DSC) prompting method. Firstly, we utilize GPT2 to learn the responses written by mental health counselors and dynamically generate mental health counseling strategies tailored to the help-seekers' needs. Secondly, the Zero-shot DSC prompting is constructed according to mental health counseling strategies and the help-seekers' post. Finally, the Zero-shot DSC prompting is employed to guide LLMs in generating more human-like responses for the help-seekers. Both automatic and manual evaluations demonstrate that Zero-shot DSC prompting can deliver more human-like responses than CoT prompting methods on LTGM tasks.",96599abdbac3106b89f3d8dd3b26fe9c38a7624f,Semantic Scholar,,highly relevant,"The paper details a study where prompt engineering is utilized to generate realistic conversational role play simulations, indicating it focuses on post-training prompting techniques." 
graph neural prompting with large language models,"['Yijun Tian', 'Huan Song', 'Zichen Wang', 'Haozhu Wang', 'Ziqing Hu', 'Fang Wang', 'N. Chawla', 'Panpan Xu']",https://arxiv.org/pdf/2309.15427,2023-09-27,,"Large language models (LLMs) have shown remarkable generalization capability with exceptional performance in various language modeling tasks. However, they still exhibit inherent limitations in precisely capturing and returning grounded knowledge. While existing work has explored utilizing knowledge graphs (KGs) to enhance language modeling via joint training and customized model architectures, applying this to LLMs is problematic owing to their large number of parameters and high computational cost. Therefore, how to enhance pre-trained LLMs using grounded knowledge, e.g., retrieval-augmented generation, remains an open question. In this work, we propose Graph Neural Prompting (GNP), a novel plug-and-play method to assist pre-trained LLMs in learning beneficial knowledge from KGs. GNP encompasses various designs, including a standard graph neural network encoder, a cross-modality pooling module, a domain projector, and a self-supervised link prediction objective. Extensive experiments on multiple datasets demonstrate the superiority of GNP on both commonsense and biomedical reasoning tasks across different LLM sizes and settings. Code is available at https://github.com/meettyj/GNP.",9a4e4ab77c3d836bab35e0578de68e8ce79af1e8,Semantic Scholar,,highly relevant,"The paper directly investigates the impact of prompt engineering strategies on the usability of AI-generated responses, indicating a focus on post-training prompting techniques." insertexpansions for toolenabled conversational agents,"['Andreas Göldi', 'Roman Rietsche']",https://arxiv.org/pdf/2307.01644,2023-07-04,,"This paper delves into an advanced implementation of Chain-of-Thought-Prompting in Large Language Models, focusing on the use of tools (or""plug-ins"") within the explicit reasoning paths generated by this prompting method. We find that tool-enabled conversational agents often become sidetracked, as additional context from tools like search engines or calculators diverts from original user intents. To address this, we explore a concept wherein the user becomes the tool, providing necessary details and refining their requests. Through Conversation Analysis, we characterize this interaction as insert-expansion - an intermediary conversation designed to facilitate the preferred response. We explore possibilities arising from this 'user-as-a-tool' approach in two empirical studies using direct comparison, and find benefits in the recommendation domain.",9c124ca43b19a834dc9eea54d5c36b7b42db655b,Semantic Scholar,,highly relevant,The paper directly investigates and compares the effectiveness of prompt-engineering techniques with fine-tuning in the specific context of phishing URL detection. "sib200 a simple, inclusive, and big evaluation dataset for topic classification in 200+ languages and dialects","['David Ifeoluwa Adelani', 'Hannah Liu', 'Xiaoyu Shen', 'Nikita Vassilyev', 'Jesujoba Oluwadara Alabi', 'Yanke Mao', 'Haonan Gao', 'Annie En-Shiun Lee']",https://arxiv.org/pdf/2309.07445,2023-09-14,,"Despite the progress we have recorded in the last few years in multilingual natural language processing, evaluation is typically limited to a small set of languages with available datasets which excludes a large number of low-resource languages. 
In this paper, we created SIB-200 -- a large-scale open-sourced benchmark dataset for topic classification in 200 languages and dialects to address the lack of evaluation dataset for Natural Language Understanding (NLU). For many of the languages covered in SIB-200, this is the first publicly available evaluation dataset for NLU. The dataset is based on Flores-200 machine translation corpus. We annotated the English portion of the dataset and extended the sentence-level annotation to the remaining 203 languages covered in the corpus. Despite the simplicity of this task, our evaluation in full-supervised setting, cross-lingual transfer setting and prompting of large language model setting show that there is still a large gap between the performance of high-resource and low-resource languages when multilingual evaluation is scaled to numerous world languages. We found that languages unseen during the pre-training of multilingual language models, under-represented language families (like Nilotic and Altantic-Congo), and languages from the regions of Africa, Americas, Oceania and South East Asia, often have the lowest performance on our topic classification dataset. We hope our dataset will encourage a more inclusive evaluation of multilingual language models on a more diverse set of languages. https://github.com/dadelani/sib-200",a517575328ca3b8289fa95bd9f71669e1cf7127a,Semantic Scholar,,highly relevant,"The paper explicitly mentions the use of 'automatic prompt engineering' to construct prompts for GPT-3, indicating its relevance to the topic of prompt engineering, especially in the context of database management." spec a soft promptbased calibration on mitigating performance variability in clinical notes summarization,"['Yu-Neng Chuang', 'Ruixiang Tang', 'Xiaoqian Jiang', 'Xia Hu']",https://arxiv.org/pdf/2303.13035,,,"Electronic health records (EHRs) store an extensive array of patient information, encompassing medical histories, diagnoses, treatments, and test outcomes. These records are crucial for enabling healthcare providers to make well-informed decisions regarding patient care. Summarizing clinical notes further assists healthcare professionals in pinpointing potential health risks and making better-informed decisions. This process contributes to reducing errors and enhancing patient outcomes by ensuring providers have access to the most pertinent and current patient data. Recent research has shown that incorporating prompts with large language models (LLMs) substantially boosts the efficacy of summarization tasks. However, we show that this approach also leads to increased output variance, resulting in notably divergent outputs even when prompts share similar meanings. To tackle this challenge, we introduce a model-agnostic Soft Prompt-Based Calibration (SPeC) pipeline that employs soft prompts to diminish variance while preserving the advantages of prompt-based summarization. Experimental findings on multiple clinical note tasks and LLMs indicate that our method not only bolsters performance but also effectively curbs variance for various LLMs, providing a more uniform and dependable solution for summarizing vital medical information.",b378e54c88d241aa917131beb65c96be3730f40c,Semantic Scholar,,highly relevant,"The paper focuses on the nature of human creativity in text-based generative art with a specific mention of prompt engineering, making it relevant to the topic." 
the unreliability of explanations in fewshot incontext learning,"['Xi Ye', 'Greg Durrett']",http://arxiv.org/pdf/2205.03401,,,"How can prompting a large language model like GPT-3 with explanations improve in-context learning? We focus specifically on two NLP tasks that involve reasoning over text, namely question answering and natural language inference. Including explanations in the prompt and having the model generate them does not consistently improve performance in the settings we study, contrary to recent results on symbolic reasoning tasks (Nye et al., 2021; Wei et al., 2022). Despite careful prompting, explanations generated by GPT-3 may not even be factually grounded in the input, even on simple tasks with straightforward extractive explanations. However, these flawed explanations can still be useful as a way to verify GPT-3’s predictions post-hoc. Through analysis in three settings, we show that explanations judged as good by humans—those that are logically consistent with the input and the prediction—usually indicate more accurate predictions. Following these observations, we present a framework for calibrating model predictions based on the reliability of the explanations. Our framework trains calibrators using automatically extracted scores that approximately assess the reliability of explanations, which helps improve performance across three different datasets",de04aa282f8055cebe86966c592bf37af6aecc99,Semantic Scholar,,highly relevant,"The paper focuses on using a prompt-based method for text classification in a few-shot learning scenario, which aligns directly with the topic of prompt engineering." augmented embeddings for custom retrievals,"['Anirudh Khatry', 'Yasharth Bajpai', 'Priyanshu Gupta', 'Sumit Gulwani', 'Ashish Tiwari']",https://arxiv.org/pdf/2310.05380,2023-10-09,,"Information retrieval involves selecting artifacts from a corpus that are most relevant to a given search query. The flavor of retrieval typically used in classical applications can be termed as homogeneous and relaxed, where queries and corpus elements are both natural language (NL) utterances (homogeneous) and the goal is to pick most relevant elements from the corpus in the Top-K, where K is large, such as 10, 25, 50 or even 100 (relaxed). Recently, retrieval is being used extensively in preparing prompts for large language models (LLMs) to enable LLMs to perform targeted tasks. These new applications of retrieval are often heterogeneous and strict -- the queries and the corpus contain different kinds of entities, such as NL and code, and there is a need for improving retrieval at Top-K for small values of K, such as K=1 or 3 or 5. Current dense retrieval techniques based on pretrained embeddings provide a general-purpose and powerful approach for retrieval, but they are oblivious to task-specific notions of similarity of heterogeneous artifacts. We introduce Adapted Dense Retrieval, a mechanism to transform embeddings to enable improved task-specific, heterogeneous and strict retrieval. Adapted Dense Retrieval works by learning a low-rank residual adaptation of the pretrained black-box embedding. We empirically validate our approach by showing improvements over the state-of-the-art general-purpose embeddings-based baseline.",e4c466cf3df4887e0121561be90e0bac78d3e1cb,Semantic Scholar,,highly relevant,"The paper explicitly mentions using prompt learning in the context of named entity recognition, indicating its relevance to prompt engineering." 
"tryage realtime, intelligent routing of user prompts to large language models","['S. Hari', 'Matt Thomson']",https://arxiv.org/pdf/2308.11601,2023-08-22,,"The introduction of the transformer architecture and the self-attention mechanism has led to an explosive production of language models trained on specific downstream tasks and data domains. With over 200, 000 models in the Hugging Face ecosystem, users grapple with selecting and optimizing models to suit multifaceted workflows and data domains while addressing computational, security, and recency concerns. There is an urgent need for machine learning frameworks that can eliminate the burden of model selection and customization and unleash the incredible power of the vast emerging model library for end users. Here, we propose a context-aware routing system, Tryage, that leverages a language model router for optimal selection of expert models from a model library based on analysis of individual input prompts. Inspired by the thalamic router in the brain, Tryage employs a perceptive router to predict down-stream model performance on prompts and, then, makes a routing decision using an objective function that integrates performance predictions with user goals and constraints that are incorporated through flags (e.g., model size, model recency). Tryage allows users to explore a Pareto front and automatically trade-off between task accuracy and secondary goals including minimization of model size, recency, security, verbosity, and readability. Across heterogeneous data sets that include code, text, clinical data, and patents, the Tryage framework surpasses Gorilla and GPT3.5 turbo in dynamic model selection identifying the optimal model with an accuracy of 50.9% , compared to 23.6% by GPT 3.5 Turbo and 10.8% by Gorilla. Conceptually, Tryage demonstrates how routing models can be applied to program and control the behavior of multi-model LLM systems to maximize efficient use of the expanding and evolving language model ecosystem.",ee025d7030d4767062af2bcd32a4d586737d30bf,Semantic Scholar,,highly relevant,"The paper focuses on an adaptive prompt-based learning method for few-shot sentiment analysis, which directly relates to the topic of prompt engineering." distractor generation for multiplechoice questions with predictive prompting and large language models,"['Semere Kiros Bitew', 'Johannes Deleu', 'Chris Develder', 'Thomas Demeester']",https://arxiv.org/pdf/2307.16338,2023-07-30,,"Large Language Models (LLMs) such as ChatGPT have demonstrated remarkable performance across various tasks and have garnered significant attention from both researchers and practitioners. However, in an educational context, we still observe a performance gap in generating distractors -- i.e., plausible yet incorrect answers -- with LLMs for multiple-choice questions (MCQs). In this study, we propose a strategy for guiding LLMs such as ChatGPT, in generating relevant distractors by prompting them with question items automatically retrieved from a question bank as well-chosen in-context examples. We evaluate our LLM-based solutions using a quantitative assessment on an existing test set, as well as through quality annotations by human experts, i.e., teachers. We found that on average 53% of the generated distractors presented to the teachers were rated as high-quality, i.e., suitable for immediate use as is, outperforming the state-of-the-art model. 
We also show the gains of our approach 1 in generating high-quality distractors by comparing it with a zero-shot ChatGPT and a few-shot ChatGPT prompted with static examples.",f1bb5051965a3a4c9288f0123dd03c26a08e1378,Semantic Scholar,,highly relevant,"The paper discusses the use of prompt-based methods for sentiment analysis, directly relating to prompt engineering." interleaving retrieval with chainofthought reasoning for knowledgeintensive multistep questions,"['H. Trivedi', 'Niranjan Balasubramanian', 'Tushar Khot', 'Ashish Sabharwal']",http://arxiv.org/pdf/2212.10509,2022-12-20,,"Prompting-based large language models (LLMs) are surprisingly powerful at generating natural language reasoning steps or Chains-of-Thoughts (CoT) for multi-step question answering (QA). They struggle, however, when the necessary knowledge is either unavailable to the LLM or not up-to-date within its parameters. While using the question to retrieve relevant text from an external knowledge source helps LLMs, we observe that this one-step retrieve-and-read approach is insufficient for multi-step QA. Here, what to retrieve depends on what has already been derived, which in turn may depend on what was previously retrieved. To address this, we propose IRCoT, a new approach for multi-step QA that interleaves retrieval with steps (sentences) in a CoT, guiding the retrieval with CoT and in turn using retrieved results to improve CoT. Using IRCoT with GPT3 substantially improves retrieval (up to 21 points) as well as downstream QA (up to 15 points) on four datasets: HotpotQA, 2WikiMultihopQA, MuSiQue, and IIRC. We observe similar substantial gains in out-of-distribution (OOD) settings as well as with much smaller models such as Flan-T5-large without additional training. IRCoT reduces model hallucination, resulting in factually more accurate CoT reasoning.",f208ea909fa7f54fea82def9a92fd81dfc758c39,Semantic Scholar,,highly relevant,"The paper explicitly mentions the use of a 'Prompt-based method', indicating a focused methodology involving prompt engineering within the context of fact verification." satisfiabilityaided language models using declarative prompting,"['Xi Ye', 'Qiaochu Chen', 'Işıl Dillig', 'Greg Durrett']",https://arxiv.org/pdf/2305.09656,2023-05-16,,"Prior work has combined chain-of-thought prompting in large language models (LLMs) with programmatic representations to perform effective and transparent reasoning. While such an approach works well for tasks that only require forward reasoning (e.g., straightforward arithmetic), it is less effective for constraint solving problems that require more sophisticated planning and search. In this paper, we propose a new satisfiability-aided language modeling (SatLM) approach for improving the reasoning capabilities of LLMs. We use an LLM to generate a declarative task specification rather than an imperative program and leverage an off-the-shelf automated theorem prover to derive the final answer. This approach has two key advantages. The declarative specification is closer to the problem description than the reasoning steps are, so the LLM can parse it out of the description more accurately. Furthermore, by offloading the actual reasoning task to an automated theorem prover, our approach can guarantee the correctness of the answer with respect to the parsed specification and avoid planning errors in the solving process. We evaluate SATLM on 8 different datasets and show that it consistently outperforms program-aided LMs in the imperative paradigm. 
In particular, SATLM outperforms program-aided LMs by 23% on a challenging subset of the GSM arithmetic reasoning dataset; SATLM also achieves a new SoTA on LSAT and BoardgameQA, surpassing previous models that are trained on the respective training sets.",f27f6d1d521d189e78f5623098ced0deea613d33,Semantic Scholar,,highly relevant,"The paper explicitly discusses prompt-based methods and a variant of prompt tuning for named entity recognition, making it highly relevant to prompt engineering." choice over control how users write with large language models using diegetic and nondiegetic prompting,"['Hai Dang', 'Sven Goller', 'Florian Lehmann', 'Daniel Buschek']",https://arxiv.org/pdf/2303.03199,2023-03-06,,"We propose a conceptual perspective on prompts for Large Language Models (LLMs) that distinguishes between (1) diegetic prompts (part of the narrative, e.g. “Once upon a time, I saw a fox...”), and (2) non-diegetic prompts (external, e.g. “Write about the adventures of the fox.”). With this lens, we study how 129 crowd workers on Prolific write short texts with different user interfaces (1 vs 3 suggestions, with/out non-diegetic prompts; implemented with GPT-3): When the interface offered multiple suggestions and provided an option for non-diegetic prompting, participants preferred choosing from multiple suggestions over controlling them via non-diegetic prompts. When participants provided non-diegetic prompts it was to ask for inspiration, topics or facts. Single suggestions in particular were guided both with diegetic and non-diegetic information. This work informs human-AI interaction with generative models by revealing that (1) writing non-diegetic prompts requires effort, (2) people combine diegetic and non-diegetic prompting, and (3) they use their draft (i.e. diegetic information) and suggestion timing to strategically guide LLMs.",fccf8776d7525627c518a56a1f4db367a4d7120b,Semantic Scholar,,highly relevant,"The paper explicitly mentions the use of prompt-based approaches and a novel pipeline for automating prompt generation in Chinese, which is directly related to prompt engineering." operation for mastoid disease,['J. Gibb'],http://europepmc.org/articles/pmc2460841?pdf=render,1935-07-20,,"SIR,-Mr. R. A. Grant is to be congratulated upon his case of recovery after prompt injection of adrenaline into the chamber of the ventricle (July 13th, p. 64). Whilst giving anaesthetics as a hospital resident about ten years ago I dealt with two cases in this way, the first unsuccessfully. In the second case, in order to make as certain as possible that the needle.had gone through the muscle, blood was withdrawn into the syringe before injection of the adrenaline. It may be, also, that the stimulus of the needle produces contraction of the cardiac muscle.-I am, etc.-, W.B. McKELVIE, AManchester, July 15th. MI.D., Ch.'M., F.R.C.S.E., D.L.O. **V In an annotation in the Journal of March 23rd (p. 593) we wrote: "" On these grounds there is something to be said for attempting to restart contractions by puncturing the right auricle, the chamber in which the contraction of the whole heart normally originates.""",eaf0c14c66124444276834602623c1b5d77cd0c4,Semantic Scholar,,somewhat relevant,"The paper mentions using a 'few-shot prompt-tuning algorithm' to fine-tune the diffusion model, which indicates it involves prompt engineering techniques." 
improving short text classification with augmented data using gpt3,"['Salvador Balkus', 'Donghui Yan']",https://www.cambridge.org/core/services/aop-cambridge-core/content/view/4F23066E3F0156382190BD76DA9A7BA5/S1351324923000438a.pdf/div-class-title-improving-short-text-classification-with-augmented-data-using-gpt-3-div.pdf,2022-05-23,," GPT-3 is a large-scale natural language model developed by OpenAI that can perform many different tasks, including topic classification. Although researchers claim that it requires only a small number of in-context examples to learn a task, in practice GPT-3 requires these training examples to be either of exceptional quality or a higher quantity than easily created by hand. To address this issue, this study teaches GPT-3 to classify whether a question is related to data science by augmenting a small training set with additional examples generated by GPT-3 itself. This study compares two augmented classifiers: the Classification Endpoint with an increased training set size and the Completion Endpoint with an augmented prompt optimized using a genetic algorithm. We find that data augmentation significantly increases the accuracy of both classifiers, and that the embedding-based Classification Endpoint achieves the best accuracy of about 76%, compared to human accuracy of 85%. In this way, giving large language models like GPT-3 the ability to propose their own training examples can improve short text classification performance.",0008b1e49c3d4afe2cfffe82ea88be147b618504,Semantic Scholar,,highly relevant,"The abstract explicitly mentions the 'prompt-engineering phase' and testing with a range of prompt types and formats, directly tying into the concept of prompt engineering for language models." medical students’ perspectives on an assessment of reflective portfolios [response to letter],"['S. Kassab', 'M. Bidmos', 'Michail Nomikos', 'Suhad Daher-Nashif', 'T. Kane', 'S. Sarangi', 'M. Abu-Hijleh']",https://www.dovepress.com/getfile.php?fileID=59764,2020-07-01,,"Salah Eldin Kassab 1 Mubarak Bidmos 1 Michail Nomikos 1 Suhad Daher-Nashif 2 Tanya Kane 2 Srikant Sarangi Marwan Abu-Hijleh 1 1Department of Basic Medical Sciences, College of Medicine, QU Health, Qatar University, Doha, Qatar; 2Department of Population Medicine, College of Medicine, QU Health, Qatar University, Doha, Qatar; 3Danish Institute of Humanities and Medicine (DIHM), Aalborg University, Aalborg, Denmark Dear editor We thank Forenc et al for their interest in our study titled Construct Validity of an Instrument for Assessment of Reflective Writing-Based Portfolios of Medical Students. In their Letter to Editor, their main critique concerned the extent to which the nonanonymity of reflective portfolios and the lack of reflection prompts to students may have affected the G-theory analysis. In their view, these two aspects will have reduced the percentage variance of the object of measurement (students) and thus influenced the variance attributed to the study facets. In addition, they draw attention that the study instrument might not be replicable for clinical students, due to increased complexity of the learning environment. We address their concerns in turn. It is important to clarify that there are currently no universal guidelines for explaining the magnitude of variance related to each component in G-theory analysis. 
Of course, any researcher would aim to get the maximum percentage of variance attributed to differences between the object of measurement compared with other facets in the measurement plan. However, the main determinant of what represents large or small variance is the purpose of the study and the identified sources of variance. For example, we have recently reported an acceptable reliability coefficient with only 27% variance due to the object of measurement, because the study aimed to measure “soft skills”, which are considered difficult to measure. Given that reflection is an enigmatic and complex construct, we believe that the 46.6% variance attributed to the object of measurement in our study was reasonably grounded. We fully acknowledge that anonymity in reflective writing–based portfolios could have reduced the variance in student–rater interaction and thus bias in assessment. However, the decision concerning whether to provide students with reflection prompts or not is a tradeoff between scaffolding a structured, guided reflection process or affording the unbounded freedom of personal reflections deriving from a rich and varied array of lived experiences. We firmly believe that the absence of reflection prompts optimizes the conditions for individually unique, authentic reflections, which must be preferred to (re)acting reflectively to a checklist of activities triggered by a set of predetermined prompts. Here, maintaining a distinction between “reflection for learning” and “reflection for assessment” is useful: although prompts are a good device for learning purposes, they are not relevant for assessment purposes. In the latter context, what students choose to",1960aa27ec7fd799941a6905c086d32ffa0214ce,Semantic Scholar,,highly relevant,"The paper discusses using few-shot prompting of GPT-3 to detect metaphoric language, which is directly related to prompt engineering." home telemonitoring of respiratory activity and heart rate variability in chronic heart failure patients the challenge of the home or hospital in heart failure project,"['G. Pinna', 'R. Maestri', 'E. Gobbi', 'M. L. La Rovere', 'J. L. Scanferlato', 'T. Witkowski', 'A. Kuś-Klinowska', 'D. Andrews', 'P. Johnson', 'S. Capomolla', 'A. Mortara']",http://www.cinc.org/Proceedings/2003/pdf/197.pdf,2003-12-01,,"Nocturnal respiratory disorders and depressed heart rate variability are known predictors of poor prognosis in chronic heart failure (CHF) patients. Intermittent monitoring of cardiorespiratory signals while the patient is at home might thus allow early identification of clinical deterioration and prompt optimization of treatment, leading to reduced hospitalizations and mortality and improved quality of life. Within the European Community multicenter trial HHH (Home or Hospital in Heart Failure), we are testing a novel low-cost system for 24-hour recording of cardiorespiratory signals, suitable to be self-managed by the patient at home, with transmission of acquired data through standard telephone lines to the medical/nursing staff.
Preliminary results from 24 CHF patients enrolled so far indicate that monthly home telemonitoring is feasible and the compliance is high.",21d1465ca2a9514e26b1b368c653f10c48d6e9fc,Semantic Scholar,,highly relevant,"The paper focuses on the use of various prompt engineering techniques in the context of legal text classification, which directly relates to the topic of prompt engineering." bioinformatics in plant breeding and research on disease resistance,"['Huiying Mu', 'Baoshan Wang', 'F. Yuan']",https://www.mdpi.com/2223-7747/11/22/3118/pdf?version=1668520760,2022-11-01,,"In the context of plant breeding, bioinformatics can empower genetic and genomic selection to determine the optimal combination of genotypes that will produce a desired phenotype and help expedite the isolation of these new varieties. Bioinformatics is also instrumental in collecting and processing plant phenotypes, which facilitates plant breeding. Robots that use automated and digital technologies to collect and analyze different types of information to monitor the environment in which plants grow, analyze the environmental stresses they face, and promptly optimize suboptimal and adverse growth conditions accordingly, have helped plant research and saved human resources. In this paper, we describe the use of various bioinformatics databases and algorithms and explore their potential applications in plant breeding and for research on plant disease resistance.",2c2b40b4f1967dc1fb640c7c4bec140110dbf2cf,Semantic Scholar,,highly relevant,"The paper explicitly mentions using zero- and few-shot prompting strategies with ChatGPT, indicating a focus on prompt engineering techniques." early diagnostic markers of lateonset neonatal sepsis,"['Preslava Gatseva', 'Alexander B. Blazhev', 'Zarko Y. Yordanov', 'Victoria G. Atanasova']",https://www.mdpi.com/2036-7503/15/3/50/pdf?version=1695182872,2023-09-01,,"Objective: Early diagnosis of nosocomial infections in newborns is a great challenge, because in the initial phase of systemic infection, clinical symptoms are often non-specific, and routinely used hematological markers are not sufficiently informative. The aim of this study was to determine the potential of early inflammatory markers to diagnose late-onset neonatal sepsis—procalcitonin (PCT), interleukin 6 (IL-6), interleukin 8 (IL-8) and endocan (ESM-1). Material and methods: A prospective clinical–epidemiological study was conducted in a third-level NICU in Pleven, Bulgaria. Patients with suspected late-onset sepsis and healthy controls were tested. A sandwich ELISA method was used to measure the serum concentrations of biomarkers. Results: Sixty newborns were included, of which 35% symptomatic and infected, 33.3% symptomatic but uninfected and 31.7% asymptomatic controls. The mean values of PCT, IL-6, I/T index and PLT differ significantly in the three groups. For ESM-1, IL-8 and CRP, the difference was statistically insignificant. The best sensitivity (78%) and negative predictive value (84%) was found for IL-6. The combinations of PCT + IL-6 and PCT + IL-6+ I/T+ PLT showed very good diagnostic potential. Conclusion: The introduction into the routine practice of indicators such as PCT and IL-6 may provide an opportunity to promptly optimize the diagnostic and therapeutic approach to LOS.",2e536dcd013be93dc1841dd0e7a0a87b2846f341,Semantic Scholar,,highly relevant,"The paper evaluates ChatGPT's performance on log parsing across different prompting methods, directly engaging with prompt engineering." 
automated extraction and visualization of metabolic networks from biomedical literature using a large language model,"['Thiptanawat Phongwattana', 'Jonathan H. Chan']",https://www.biorxiv.org/content/biorxiv/early/2023/06/29/2023.06.27.546560.full.pdf,2023-06-29,,"The rapid growth of biomedical literature presents a significant challenge for researchers to extract and analyze relevant information efficiently. In this study, we explore the application of GPT, the large language model to automate the extraction and visualization of metabolic networks from a corpus of PubMed abstracts. Our objective is to provide a valuable tool for biomedical researchers to explore and understand the intricate metabolic interactions discussed in scientific literature. We begin by splitting a ton of the tokens within the corpus, as the GPT-3.5-Turbo model has a token limit of 4,000 per analysis. Through iterative prompt optimization, we successfully extract a comprehensive list of metabolites, enzymes, and proteins from the abstracts. To validate the accuracy and completeness of the extracted entities, our biomedical data domain experts compare them with the provided abstracts and ensure a fully matched result. Using the extracted entities, we generate a directed graph that represents the metabolic network including 3 types of metabolic events that consist of metabolic consumption, metabolic reaction, and metabolic production. The graph visualization, achieved through Python and NetworkX, offers a clear representation of metabolic pathways, highlighting the relationships between metabolites, enzymes, and proteins. Our approach integrates language models and network analysis, demonstrating the power of combining automated information extraction with sophisticated visualization techniques. The research contributions are twofold. Firstly, we showcase the ability of GPT-3.5-Turbo to automatically extract metabolic entities, streamlining the process of cataloging important components in metabolic research. Secondly, we present the generation and visualization of a directed graph that provides a comprehensive overview of metabolic interactions. This graph serves as a valuable tool for further analysis, comparison with existing pathways, and updating or refining metabolic networks. Our findings underscore the potential of large language models and network analysis techniques in extracting and visualizing metabolic information from scientific literature. This approach enables researchers to gain insights into complex biological systems, advancing our understanding of metabolic pathways and their components.",439c2a5c4883b421ca316617b1306583cc1d706c,Semantic Scholar,,highly relevant,"The paper explicitly mentions the use of 'few-shot-prompted pre-trained language models' and adapting the 'chain-of-thought method of prompting', indicating a focus on applying specific prompting techniques to enhance language models' performance in generating commonsense knowledge." mapo boosting large language model performance with modeladaptive prompt optimization,"['Yuyan Chen', 'Zhihao Wen', 'Ge Fan', 'Zhengyu Chen', 'Wei Wu', 'Dayiheng Liu', 'Zhixu Li', 'Bang Liu', 'Yanghua Xiao']",https://aclanthology.org/2023.findings-emnlp.215.pdf,,,"Prompt engineering, as an efficient and effective way to leverage Large Language Models (LLM), has drawn a lot of attention from the research community. The existing research primarily emphasizes the importance of adapting prompts to specific tasks, rather than specific LLMs. 
However, a good prompt is not solely defined by its wording, but also binds to the nature of the LLM in question. In this work, we first quantitatively demonstrate that different prompts should be adapted to different LLMs to enhance their capabilities across various down-stream tasks in NLP. Then we novelly propose a model-adaptive prompt optimizer (MAPO) method that optimizes the original prompts for each specific LLM in downstream tasks. Extensive experiments indicate that the proposed method can effectively refine prompts for an LLM, leading to significant improvements over various downstream tasks.",91b6158978b248e9a0e65d0d588bc1054e72bc16,Semantic Scholar,,highly relevant,"The paper focuses on using GPT-4 for medical summarization through a few-shot prompting technique, indicating an application of prompt engineering." emerging technology in acute resuscitation monitoring,"['M. Tichauer', 'J. Mccoy']",http://www.scirp.org/journal/PaperDownload.aspx?paperID=24794,2012-11-23,,"Fluid optimization in the resuscitation of shock became the mainstay of treatment following the advent of Early Goal-Directed Therapy (EGDT) by Rivers et al. in 2001 [1]. Patients presenting in shock require prompt optimization of volume status and cardiac out- put to ensure adequate perfusion. Poor optimization may be associated with prolonged hospital and intensive care unit stays. The prior gold standard, pulmonary artery catheterization, is rarely available in the emergency department setting and its invasive nature has led to recent re-evaluation of its clinical utility. However, there are new monitoring technologies that are being studied in the intensive care unit setting that may soon be available in emergency departments to aid in nursing and physician decision making to improve acute resuscitation.",93e09c5feb9b2ffc8926b4edff13b3d8e02e41de,Semantic Scholar,,highly relevant,The use of a 'few-shot prompt to a large language model (GPT-3)' directly relates to the application of prompt engineering to influence model outputs. recombinant hemagglutinin displaying on yeast reshapes congenital lymphocyte subsets to prompt optimized systemic immune protection against avian influenza infection,"['Han Zhang', 'Zexing Li', 'Huixia Zhang', 'Yanyu Guo', 'Xinyi Zhang', 'Lilin Zhang', 'Liu Yang', 'Shujun Li', 'Changyan Li', 'D. Cui', 'R. Xie', 'Yongqing Li', 'Jinhai Huang']",https://www.frontiersin.org/articles/10.3389/fmicb.2023.1153922/pdf,2023-05-31,,"Introduction Prophylactic vaccination is regarded as the most effective means to control avian flu infection. Currently, there is a need for a universal vaccine that provides broad and long-lasting protection against influenza virus. Meanwhile, although yeast-based vaccines have been used in clinic, studies are still required to further understand the molecular mechanism of yeast-based vaccines under physiological conditions. Methods We generated a yeast-based vaccine against influenza hemagglutinin (HA) of H5, H7 and H9 using surface displaying technology and evaluated the protective efficacy of chickens after exposure to H9N2 influenza virus. Results Oral yeast vaccine provided less clinical syndrome, reduced viral loading and alleviated airway damage significantly. Compared to the commercial inactivated vaccine, yeast vaccine stimulated the activation of splenic NK and APCs cells and boosted TLR7-IRF7-IFN signaling in spleen. 
Meanwhile, γδ T cells in the bursa of Fabricius were activated and the innate lymphoid cells (ILCs) in the bursa of Fabricius promoted the CILPs to differentiate to ILC3 cells in oral yeast birds. Moreover, the reshaped gut microbiota and a suppressed Th17-IL17-mediated inflammation in intestine was observed in oral yeast chickens, which might facilitate the recovery of intestinal mucosal immunity upon virus infection. Collectively, our findings suggest that oral yeast based multivalent bird flu vaccines provide an attractive strategy to update host defense function via reshapes of multi-systemic immune homeostasis.",98090bbc7b784a1f64d4522c5e1987b196863fd0,Semantic Scholar,,highly relevant,"The paper explicitly mentions the implementation of one-shot prompts using GPT-3.5, indicating it directly explores the use of prompt engineering techniques." diagnostic utility of endocan and interleukins for lateonset neonatal sepsis,"['Preslava Gatseva', 'Alexander B. Blazhev', 'Zarko Y. Yordanov', 'Victoria G. Atanasova']",https://sciendo.com/pdf/10.2478/jbcr-2023-0016,2023-11-01,,"Summary The aim of this study was to determine the potential of early inflammatory markers to diagnose late-onset neonatal sepsis – interleukin 6 (IL-6), interleukin 8 (IL-8) and endocan (ESM-1), and to compare them with routinely used markers like C-reactive protein (CRP) and procalcitonin (PCT). A prospective (January, 2022 – January, 2023) clinical-epidemiological study was conducted in a third level NICU in Pleven, Bulgaria. Patients with suspected nosocomial infection and healthy controls were tested. A sandwich ELISA method was used to measure the serum concentrations. Sixty newborns with an average gestational age of 29.75±3.61 gestational weeks were included, of which 35% were symptomatic and infected, 33.3% were symptomatic but uninfected, and 31.7% were asymptomatic controls. The mean values of PCT and IL-6 differ significantly in the three groups. For ESM-1, IL-8 and CRP, the difference was statistically insignificant. The best sensitivity (78%) and negative predictive value (84%) was found for IL-6. The introduction into routine practice of indicators such as PCT and IL-6 may provide an opportunity to promptly optimize the diagnostic and therapeutic approach to LOS.",b281d891508e347149e3623b339861fa47eabe07,Semantic Scholar,,highly relevant,"The paper is highly relevant to prompt engineering because it assesses the effectiveness of large language models in spoken language learning by investigating various prompting techniques, including zero- and few-shot methods, chain-of-thought prompting, and in-domain exemplars." artificial intelligence for health message generation an empirical study using a large language model (llm) and prompt engineering,"['Sue Lim', 'Ralf Schmälzle']",https://www.frontiersin.org/articles/10.3389/fcomm.2023.1129082/pdf,2023-05-26,,"Introduction This study introduces and examines the potential of an AI system to generate health awareness messages. The topic of folic acid, a vitamin that is critical during pregnancy, served as a test case. Method We used prompt engineering to generate awareness messages about folic acid and compared them to the most retweeted human-generated messages via human evaluation with an university sample and another sample comprising of young adult women. We also conducted computational text analysis to examine the similarities between the AI-generated messages and human generated tweets in terms of content and semantic structure. 
Results The results showed that AI-generated messages ranked higher in message quality and clarity across both samples. The computational analyses revealed that the AI generated messages were on par with human-generated ones in terms of sentiment, reading ease, and semantic content. Discussion Overall, these results demonstrate the potential of large language models for message generation. Theoretical, practical, and ethical implications are discussed.",04f1ff349424b4fb64a24fcaf44532d69826b0f4,Semantic Scholar,,somewhat relevant,"The paper specifically mentions the use of prompting techniques for controlling the formality level of machine translation, indicating it covers the application of prompt engineering." prompt engineering for textbased generative art,['J. Oppenlaender'],http://arxiv.org/pdf/2204.13988,,,"Text-based generative art has seen an explosion of interest in 2021. Online communities around text-based generative art as a novel digital medium have quickly emerged. This short paper identifies five types of prompt modifiers used by practitioners in the community of text-based generative art based on a 3-month ethnographic study on Twitter. The novel taxonomy of prompt modifiers provides researchers a conceptual starting point for investigating the practices of text-based generative art, but also may help practitioners of text-based generative art improve their images. The paper concludes with a discussion of research opportunities in the space of text-based generative art and the broader implications of prompt engineering from the perspective of human-AI interaction in future applications beyond the use case of text-based generative art.",07cd498aacfb4d39fa2e0e8d8a9c8ad881257300,Semantic Scholar,,highly relevant,The paper specifically mentions using an 'LLM prompting generator' which directly relates to prompt engineering in the context of question answering. ebhaam at semeval2023 task 1 a clipbased approach for comparing crossmodality and unimodality in visual word sense disambiguation,"['Zeinab Taghavi', 'Parsa Haghighi Naeini', 'Mohammad Ali Sadraei Javaheri', 'S. Gooran', 'Ehsaneddin Asgari', 'H. Rabiee', 'H. Sameti']",https://aclanthology.org/2023.semeval-1.269.pdf,,,"This paper presents an approach to tackle the task of Visual Word Sense Disambiguation (Visual-WSD), which involves determining the most appropriate image to represent a given polysemous word in one of its particular senses. The proposed approach leverages the CLIP model, prompt engineering, and text-to-image models such as GLIDE and DALL-E 2 for both image retrieval and generation. To evaluate our approach, we participated in the SemEval 2023 shared task on “Visual Word Sense Disambiguation (Visual-WSD)” using a zero-shot learning setting, where we compared the accuracy of different combinations of tools, including “Simple prompt-based” methods and “Generated prompt-based” methods for prompt engineering using completion models, and text-to-image models for changing input modality from text to image. Moreover, we explored the benefits of cross-modality evaluation between text and candidate images using CLIP. Our experimental results demonstrate that the proposed approach reaches better results than cross-modality approaches, highlighting the potential of prompt engineering and text-to-image models to improve accuracy in Visual-WSD tasks. 
We assessed our approach in a zero-shot learning scenario and attained an accuracy of 68.75% in our best attempt.",08e0e696732103e585fd629e23888fd4acbb22df,Semantic Scholar,,highly relevant,"The paper explores how non-AI experts engage with and design prompts for LLMs, making it directly relevant to the study of prompt engineering." large language models help facilitate the automated synthesis of information on potential pest controllers,"['D. Scheepens', 'Joseph Millard', 'M. Farrell', 'T. Newbold']",https://www.biorxiv.org/content/biorxiv/early/2024/01/15/2024.01.12.575330.full.pdf,2024-01-15,,"The body of ecological literature, which informs much of our knowledge of the global loss of biodiversity, has been experiencing rapid growth in recent decades. The increasing difficulty to synthesise this literature manually has simultaneously resulted in a growing demand for automated text mining methods. Within the domain of deep learning, large language models (LLMs) have been the subject of considerable attention in recent years by virtue of great leaps in progress and a wide range of potential applications, however, quantitative investigation into their potential in ecology has so far been lacking. In this work, we analyse the ability of GPT-4 to extract information about invertebrate pests and pest controllers from abstracts of a body of literature on biological pest control, using a bespoke, zero-shot prompt. Our results show that the performance of GPT-4 is highly competitive with other state-of-the-art tools used for taxonomic named entity recognition and geographic location extraction tasks. On a held-out test set, we show that species and geographic locations are extracted with F1-scores of 99.8% and 95.3%, respectively, and highlight that the model is able to distinguish very effectively between the primary roles of interest (predators, parasitoids and pests). Moreover, we demonstrate the ability of the model to effectively extract and predict taxonomic information across various taxonomic ranks, and to automatically correct spelling mistakes. However, we do report a small number of cases of fabricated information (hallucinations). As a result of the current lack of specialised, pre-trained ecological language models, general-purpose LLMs may provide a promising way forward in ecology. Combined with tailored prompt engineering, such models can be employed for a wide range of text mining tasks in ecology, with the potential to greatly reduce time spent on manual screening and labelling of the literature.",092b230eee81f214a505eb57bea4dd0342555c10,Semantic Scholar,,somewhat relevant,"The abstract mentions 'LLM prompting of incident narratives,' which indicates the use of prompts with GPT-3.5, relevant to the study of prompt engineering." comparative analysis of gpt4 and human graders in evaluating human tutors giving praise to students,"['Dollaya Hirunyasiri', 'Danielle R. Thomas', 'Jionghao Lin', 'K. Koedinger', 'Vincent Aleven']",https://arxiv.org/pdf/2307.02018,2023-07-05,,"Research suggests that providing specific and timely feedback to human tutors enhances their performance. However, it presents challenges due to the time-consuming nature of assessing tutor performance by human evaluators. Large language models, such as the AI-chatbot ChatGPT, hold potential for offering constructive feedback to tutors in practical settings.
Nevertheless, the accuracy of AI-generated feedback remains uncertain, with scant research investigating the ability of models like ChatGPT to deliver effective feedback. In this work-in-progress, we evaluate 30 dialogues generated by GPT-4 in a tutor-student setting. We use two different prompting approaches, the zero-shot chain of thought and the few-shot chain of thought, to identify specific components of effective praise based on five criteria. These approaches are then compared to the results of human graders for accuracy. Our goal is to assess the extent to which GPT-4 can accurately identify each praise criterion. We found that both zero-shot and few-shot chain of thought approaches yield comparable results. GPT-4 performs moderately well in identifying instances when the tutor offers specific and immediate praise. However, GPT-4 underperforms in identifying the tutor's ability to deliver sincere praise, particularly in the zero-shot prompting scenario where examples of sincere tutor praise statements were not provided. Future work will focus on enhancing prompt engineering, developing a more general tutoring rubric, and evaluating our method using real-life tutoring dialogues.",0b94b999fdd9488e1a0914d37f8fb3ea7e9ea0fd,Semantic Scholar,,somewhat relevant,"The paper mentions the use of 'zero-shot prompting', indicating that it involves prompt engineering techniques, though the focus is on a specific application in pharmacogenomics." gptempowered personalized elearning system for programming languages,"['Jennifer Jin', 'Mira Kim']",https://www.mdpi.com/2076-3417/13/23/12773/pdf?version=1701183024,2023-11-28,,"The eLearning approach to programming language instruction has gained widespread acceptance due to advantages such as accessibility, temporal flexibility, and content reusability. However, the current eLearning for programming predominantly employs the delivery of one-size-fits-all content, engendering elevated costs in both the development of language coursework and administration of eLearning sessions, which includes the labor-intensive task of grading student submissions. A compelling research question to consider is how to construct an eLearning system capable of delivering personalized, student-centric content, automating the generation of coursework elements, and eliminating the need for instructor involvement in the management of eLearning sessions. Our approach to delivering a definite solution to the question involves the utilization of a suite of advanced software technologies: GPT to dynamically generate course contents/components, prompt engineering to personalize course content for each individual student, and autonomous computing to manage eLearning sessions without the need for human intervention. The research results encompass the design of an eLearning framework covering all programming languages, a fully functional Python-based implementation, seamless integration with ChatGPT for dynamic content generation, a high degree of content personalization, and the elimination of manual effort required for managing eLearning sessions.",0e11a4323328c7d1d00d9f7e6dd163ad43a3ffa4,Semantic Scholar,,somewhat relevant,"The paper mentions using zero-shot prompting on language models, which aligns with the interest in hard prefix prompts even though it doesn't specify the prompt type as 'hard prefix'." 
polyglot prompt multilingual multitask prompt training,"['Jinlan Fu', 'See-Kiong Ng', 'Pengfei Liu']",https://aclanthology.org/2022.emnlp-main.674.pdf,2022-04-29,,"This paper aims for a potential architectural improvement for multilingual learning and asks: Can different tasks from different languages be modeled in a monolithic framework, i.e. without any task/language-specific module? The benefit of achieving this could open new doors for future multilingual research, including allowing systems trained on low resources to be further assisted by other languages as well as other tasks. We approach this goal by developing a learning framework named Polyglot Prompting to exploit prompting methods for learning a unified semantic space for different languages and tasks with multilingual prompt engineering. We performed a comprehensive evaluation of 6 tasks, namely topic classification, sentiment classification, named entity recognition, question answering, natural language inference, and summarization, covering 24 datasets and 49 languages. The experimental results demonstrated the efficacy of multilingual multitask prompt-based learning and led to inspiring observations. We also present an interpretable multilingual evaluation methodology and show how the proposed framework, multilingual multitask prompt training, works. We release all datasets prompted in the best setting and code.",15437760a28d528bb1b76794aa4b1d15e7ba2a16,Semantic Scholar,,highly relevant,"The paper focuses on using zero-shot prompting of pre-trained language models to predict power connotation, aligning with the theme of prompt engineering." using chatgpt with confidence for biodiversityrelated information tasks,"['Michael Elliott', 'José Fortes']",https://biss.pensoft.net/article/112926/download/pdf/,2023-09-19,,"Recent advancements in conversational Artificial Intelligence (AI), such as OpenAI's Chat Generative Pre-Trained Transformer (ChatGPT), present the possibility of using large language models (LLMs) as tools for retrieving, analyzing, and transforming scientific information. We have found that ChatGPT (GPT 3.5) can provide accurate biodiversity knowledge in response to questions about species descriptions, occurrences, and taxonomy, as well as structure information according to data sharing standards such as Darwin Core. A rigorous evaluation of ChatGPT's capabilities in biodiversity-related tasks may help to inform viable use cases for today's LLMs in research and information workflows. In this work, we test the extent of ChatGPT's biodiversity knowledge, characterize its mistakes, and suggest how LLM-based systems might be designed to complete knowledge-based tasks with confidence. To test ChatGPT's biodiversity knowledge, we compiled a question-and-answer test set derived from Darwin Core records available in Integrated Digitized Biocollections (iDigBio). Each question focuses on one or more Darwin Core terms to test the model’s ability to recall species occurrence information and its understanding of the standard. The test set covers a range of locations, taxonomic groups, and both common and rare species (defined by the number of records in iDigBio). The results of the tests will be presented. We also tested ChatGPT on generative tasks, such as creating species occurrence maps. A visual comparison of the maps with iDigBio data shows that for some species, ChatGPT can generate fairly accurate representations of their geographic ranges (Fig. 1). 
ChatGPT's incorrect responses in our tests show several patterns of mistakes. First, responses can be self-conflicting. For example, when asked ""Does Acer saccharum naturally occur in Benton, Oregon?"", ChatGPT responded ""YES, Acer saccharum DOES NOT naturally occur in Benton, Oregon"". ChatGPT can also be misled by semantics in species names. For Rafinesquia neomexicana, the word ""neomexicana"" leads ChatGPT to believe that the species primarily occurs in New Mexico, USA. ChatGPT may also confuse species, such as when attempting to describe a lesser-known species (e.g., a rare bee) within the same genus as a better-known species. Other causes of mistakes include hallucination (Ji et al. 2023), memorization (Chang and Bergen 2023), and user deception (Li et al. 2023). Some mistakes may be avoided by prompt engineering, e.g., few-shot prompting (Chang and Bergen 2023) and chain-of-thought prompting (Wei et al. 2022). These techniques assist Large Language Models (LLMs) by clarifying expectations or by guiding recollection. However, such methods cannot help when LLMs lack required knowledge. In these cases, alternative approaches are needed. A desired reliability can be theoretically guaranteed if responses that contain mistakes are discarded or corrected. This requires either detecting or predicting mistakes. Sometimes mistakes can be ruled out by verifying responses with a trusted source. For example, a trusted specimen record might be found that corroborates the response. The difficulty, however, is finding such records programmatically; e.g., using iDigBio and Global Biodiversity Information Facility's (GBIF) search Application Programming Interfaces (APIs) requires specifying indexed terms that might not appear in an LLM's response. This presents a secondary problem for which LLMs may be well suited. Note that with presence-only data, it can be difficult to disprove presence claims or prove absence claims. Besides verification, mistakes may be predicted using probabilistic methods. Formulating mistake probabilities often relies on heuristics. For example, variability in a model’s responses to a repeated query can be a sign of hallucination (Manakul et al. 2023). In practice, both probabilistic and verification methods may be needed to reach a desired reliability. LLM outputs that can be verified may be directly accepted (or discarded), while others are judged by estimating mistake probabilities. We will consider a set of heuristics and verification methods, and report empirical assessments of their impact on ChatGPT’s reliability.",17abf939baa953dd69dfaa4c2af5719217102c11,Semantic Scholar,,highly relevant,"The paper focuses on using large language models with zero-shot prompt-based information extraction in clinical settings, which directly involves prompt engineering." improve performance of finetuning language models with prompting,"['Noémi Ligeti-Nagy', 'Zijian Győző Yang']",https://www.infocommunications.hu/documents/169298/4882687/InfocomJournal_2023_SpecISS_ICAI_10.pdf,,,"This paper explores the effectiveness of prompt programming in the fine-tuning process of a Hungarian language model. The study builds on the prior success of prompt engineering in natural language processing tasks and employs the prompting method to enhance the fine-tuning performance of a huBERT model on several benchmark datasets of HuLU. The experimentation involves testing 45 prompt combinations for the HuCoPA dataset and 15 prompt variations for the HuRTE and HuWNLI datasets. 
The findings reveal that the addition of an instructional text consistently produces the best results across all winning cases, and that the [CLS] token produces the best results in the separator token experiments. The most significant enhancement was observed in the HuWNLI dataset, with an increase in accuracy from 65% to 85%. These results demonstrate that the addition of instruct text is crucial and sufficient in enabling the language model to effectively interpret and solve the Winograd Schemata problem. These results showcase the potential of prompt programming in enhancing the performance of language models in fine-tuning tasks, and highlight the importance of incorporating task-specific instructions to improve model interpretability and accuracy.",1fa49437707e703349f9335208cbede42166082e,Semantic Scholar,,highly relevant,"The paper is highly relevant because it discusses using zero-shot prompting of pre-trained language models (PLM) for pseudo label acquisition, which falls directly under the category of utilizing hard prefix prompts in prompt engineering." "the c4h, tat, hppr and hppd genes prompted engineering of rosmarinic acid biosynthetic pathway in salvia miltiorrhiza hairy root cultures","['Ying Xiao', 'Lei Zhang', 'Shouhong Gao', 'Saengking Saechao', 'Peng Di', 'Junfeng Chen', 'Wansheng Chen']",https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0029713&type=printable,2011-12-29,,"Rational engineering to produce biologically active plant compounds has been greatly impeded by our poor understanding of the regulatory and metabolic pathways underlying the biosynthesis of these compounds. Here we capitalized on our previously described gene-to-metabolite network in order to engineer rosmarinic acid (RA) biosynthesis pathway for the production of beneficial RA and lithospermic acid B (LAB) in Salvia miltiorrhiza hairy root cultures. Results showed their production was greatly elevated by (1) overexpression of single gene, including cinnamic acid 4-hydroxylase (c4h), tyrosine aminotransferase (tat), and 4-hydroxyphenylpyruvate reductase (hppr), (2) overexpression of both tat and hppr, and (3) suppression of 4-hydroxyphenylpyruvate dioxygenase (hppd). Co-expression of tat/hppr produced the most abundant RA (906 mg/liter) and LAB (992 mg/liter), which were 4.3 and 3.2-fold more than in their wild-type (wt) counterparts respectively. And the value of RA concentration was also higher than that reported before, that produced by means of nutrient medium optimization or elicitor treatment. It is the first report of boosting RA and LAB biosynthesis through genetic manipulation, providing an effective approach for their large-scale commercial production by using hairy root culture systems as bioreactors.",221e801f9a39ff055773b2a20d91e3efadbea921,Semantic Scholar,,somewhat relevant,"The paper focuses on evaluating large language models using a benchmark and mentions the use of zero-shot prompts, which relates to prompt engineering." revolutionizing natural language understanding with prompt engineering a comprehensive study,"['Siddhartha Acharyya', 'Soumyadeep Mukherjee, Mukherjee', 'Srinjoy Saha', 'Debrupa Pal']",https://doi.org/10.47001/irjiet/2023.710091,,,"- Urbanization is a global phenomenon, with more than half of the world's population residing in cities. This rapid urban growth has placed immense pressure on infrastructure and resources, leading to a multitude of challenges related to sustainability, efficiency, and resilience. 
Prompt engineering, an emerging field at the intersection of civil engineering and technology, offers innovative solutions to address these urban challenges. This research paper explores the key concepts, methodologies, and case studies of prompt engineering as a means to promote sustainable urban development. It examines the utilization of cutting-edge technologies such as the Internet of Things (IoT), artificial intelligence (AI), and data analytics in infrastructure management, urban planning, and transportation systems. By showcasing various successful implementations of prompt engineering practices from around the world, this research underscores the importance of embracing innovative approaches to tackle urban challenges and move towards a more sustainable, resilient, and efficient urban future.",235c784c07c7d1c4388a2adb6911a613d5901e70,Semantic Scholar,,somewhat relevant,"The paper discusses using multiple input prompts with LLMs for evaluating text style transfer, indicating the application of prompt engineering." can chatgpt understand causal language in science claims,"['Yuheun Kim', 'Lu Guo', 'Bei Yu', 'Yingya Li']",https://aclanthology.org/2023.wassa-1.33.pdf,,,"This study evaluated ChatGPT’s ability to understand causal language in science papers and news by testing its accuracy in a task of labeling the strength of a claim as causal, conditional causal, correlational, or no relationship. The results show that ChatGPT is still behind the existing fine-tuned BERT models by a large margin. ChatGPT also had difficulty understanding conditional causal claims mitigated by hedges. However, its weakness may be utilized to improve the clarity of human annotation guideline. Chain-of-Thoughts were faithful and helpful for improving prompt performance, but finding the optimal prompt is difficult with inconsistent results and the lack of effective method to establish cause-effect between prompts and outcomes, suggesting caution when generalizing prompt engineering results across tasks or models.",27d80545d142ced9b921290b5b2798cabd55468b,Semantic Scholar,,highly relevant,"The paper discusses using zero-shot and many-shot prompts with GPT models for poem style generation, which directly relates to hard prefix prompt engineering." contextual stance classification using prompt engineering,"['Felipe Penhorate Carvalho de Fonseca', 'Ivandré Paraboni', 'L. A. Digiampietri']",https://sol.sbc.org.br/index.php/stil/article/download/25435/25256,2023-09-25,,"This paper introduces a prompt-based method for few-shot learning addressing, as an application example, contextual stance classification, that is, the task of determining the attitude expressed by a given statement within a conversation thread with multiple points of view towards another statement. More specifically, we envisaged a method that uses the existing conversation thread (i.e., messages that are part of the test data) to create natural language prompts for few-shot learning with minimal reliance on training samples, whose preliminary results suggest that prompt engineering may be a competitive alternative to supervised methods both in terms of accuracy and development costs for the task at hand.",2d90460431c093757fcf651e333bc0da5f5404c2,Semantic Scholar,,highly relevant,"The paper discusses incorporating prompts into a GPT model for improving Chinese address parsing, which is directly related to the use of hard prefix prompting." prompt engineering in medical education,"['Thomas F. 
Heston', 'Charya Khun']",https://www.mdpi.com/2813-141X/2/3/19/pdf?version=1693479951,2023-08-31,,"Artificial intelligence-powered generative language models (GLMs), such as ChatGPT, Perplexity AI, and Google Bard, have the potential to provide personalized learning, unlimited practice opportunities, and interactive engagement 24/7, with immediate feedback. However, to fully utilize GLMs, properly formulated instructions are essential. Prompt engineering is a systematic approach to effectively communicating with GLMs to achieve the desired results. Well-crafted prompts yield good responses from the GLM, while poorly constructed prompts will lead to unsatisfactory responses. Besides the challenges of prompt engineering, significant concerns are associated with using GLMs in medical education, including ensuring accuracy, mitigating bias, maintaining privacy, and avoiding excessive reliance on technology. Future directions involve developing more sophisticated prompt engineering techniques, integrating GLMs with other technologies, creating personalized learning pathways, and researching the effectiveness of GLMs in medical education.",3159478fbc81e562c812b9d5dc1891271b21f0c4,Semantic Scholar,,highly relevant,The paper directly engages with prompt engineering by proposing a Self-Prompting framework for LLMs to perform ODQA tasks without training data. chatgpt opens a new door for bioinformatics,['Dong Xu'],https://journal.hep.com.cn/qb/EN/PDF/10.15302/J-QB-023-0328,2023-04-21,,"ChatGPT is an artificial intelligence (AI) system that can perform sophisticated writing and dialogs after learning from vast amounts of linguistic data. The success of ChatGPT is phenomenal. AI-based human-machine language interaction has been at the center of AI competition in recent years. The major players in this game have been Google, Meta, and OpenAI. Google was in the best position from the outset, given its invention of Transformer (the cornerstone of all cutting-edge language models) and its significant edge in reinforcement learning. Yet, Google’s efforts in this area were rather diffusing. It kept generating language model variants with incremental innovations but failed to reach the next level. Meta has a strong AI team, including many top AI researchers in the world. Nevertheless, their faith in self-supervised learning to solve human-machine interaction did not deliver high-impact success. Conversely, OpenAI, with a small team, stayed focused on a single product line (GPT, including its latest release of GPT-4). It moved in the right direction of using human input to “align” the language model based on the Reinforcement Learning from Human Feedback (RLHF) approach. The fact that OpenAI ultimately prevailed in this game shows that the model alignment to human labeling through supervised and reinforcement learning is critical for human-machine interaction. However, a chatbot’s actions rely heavily on cues (prompts) provided by human operators. To properly utilize ChatGPT’s capabilities, prompts to instruct or mentor the chatbot must be carefully designed to get valuable, valid, and robust responses. 
This process becomes another “alignment” problem of using prompt engineering to best probe ChatGPT’s knowledge graph for best serving users’ needs.",358d1d9eed69a6eadcda9996b3f13b0e0a356b88,Semantic Scholar,,highly relevant,"The paper focuses on utilizing prompt learning with both text and audio information for emotion recognition, which aligns with the topic of prompt engineering, specifically demonstrating an application of prompt-based methods." linguistic annotation generation with chatgpt a synthetic dataset of speech functions for discourse annotation of casual conversations,"['Lidiia Ostyakova', 'Kseniia Petukhova', 'Veronika Smilga', 'Dilyara Zharikova']",https://doi.org/10.28995/2075-7182-2023-22-386-403,2023-06-19,,"This paper is devoted to examining the hierarchical and multilayered taxonomy of Speech Functions, encompassing pragmatics, turn-taking, feedback, and topic switching in open-domain conversations. To evaluate the distinctiveness of closely related pragmatic classes, we conducted comparative analyses involving both expert annotators and crowdsourcing workers. We then carried out classification experiments on a manually annotated dataset and a synthetic dataset generated using ChatGPT. We looked into the viability of using ChatGPT to produce data for such complex topics as discourse. Our findings contribute to the field of prompt engineering techniques for linguistic annotation in large language models, offering valuable insights for the development of more sophisticated dialogue systems.",416133943b24dc5122e05d9c7913439a83f2592e,Semantic Scholar,,highly relevant,"The paper introduces 'prompt tuning' to achieve fast adaptation for language embeddings, which is directly related to the topic of prompt engineering." prompt engineering or finetuning a case study on phishing detection with large language models,"['Fouad Trad', 'Ali Chehab']",https://www.mdpi.com/2504-4990/6/1/18/pdf?version=1707208182,2024-02-06,,"Large Language Models (LLMs) are reshaping the landscape of Machine Learning (ML) application development. The emergence of versatile LLMs capable of undertaking a wide array of tasks has reduced the necessity for intensive human involvement in training and maintaining ML models. Despite these advancements, a pivotal question emerges: can these generalized models negate the need for task-specific models? This study addresses this question by comparing the effectiveness of LLMs in detecting phishing URLs when utilized with prompt-engineering techniques versus when fine-tuned. Notably, we explore multiple prompt-engineering strategies for phishing URL detection and apply them to two chat models, GPT-3.5-turbo and Claude 2. In this context, the maximum result achieved was an F1-score of 92.74% by using a test set of 1000 samples. Following this, we fine-tune a range of base LLMs, including GPT-2, Bloom, Baby LLaMA, and DistilGPT-2—all primarily developed for text generation—exclusively for phishing URL detection. The fine-tuning approach culminated in a peak performance, achieving an F1-score of 97.29% and an AUC of 99.56% on the same test set, thereby outperforming existing state-of-the-art methods. 
These results highlight that while LLMs harnessed through prompt engineering can expedite application development processes, achieving a decent performance, they are not as effective as dedicated, task-specific LLMs.",505e4a7bedadab7f6de006c3c1e1144e272f4695,Semantic Scholar,,highly relevant,"The paper describes a method based on prompt templates for grammar correction, which directly involves designing prompts for a task, thus relevant to prompt engineering." from web catalogs to google a retrospective study of web search engines sustainable development,"['M. Duka', 'Marek Sikora', 'Artur Strzelecki']",https://www.mdpi.com/2071-1050/15/8/6768/pdf?version=1681779086,2023-04-17,,This study presents a review of search engines and search engine optimization and shows how the search engine landscape relates to sustainable development. We have used a narrative review research method and described three main topics: the past and present of web catalogs and search engines; current knowledge about the dominant types of search results presented in Google search; and methods of search engine optimization. Technical elements of important website areas related to technical website auditing are discussed. We summarize our research with several key findings on how web search engines are involved in sustainable development and offer a glimpse into the future use of web searching with the help of artificial intelligence chats and prompt engineering.,513b96c7d5d1f9a74afd9d946d5a7c83fe592869,Semantic Scholar,,highly relevant,"The paper explicitly mentions selecting the best prompt template through an ablation study, which directly involves prompt engineering." better integrating vision and semantics for improving fewshot classification,"['Zhuoling Li', 'Yong Wang']",https://dl.acm.org/doi/pdf/10.1145/3581783.3613819,2023-10-26,,"Some recent methods address few-shot classification by integrating visual and semantic prototypes. However, they usually ignore the difference in feature structure between the visual and semantic modalities, which leads to limited performance improvements. In this paper, we propose a novel method, called bimodal integrator (BMI), to better integrate visual and semantic prototypes. In BMI, we first construct a latent space for each modality via a variational autoencoder, and then align the semantic latent space to the visual latent space. Through this semantics-to-vision alignment, the semantic modality is mapped to the visual latent space and has the same feature structure as the visual modality. As a result, the visual and semantic prototypes can be better integrated. In addition, based on the multivariate Gaussian distribution and the prompt engineering, a data augmentation scheme is designed to ensure the accuracy of modality alignment during the training process. Experimental results demonstrate that BMI significantly improves few-shot classification, making simple baselines outperform the most advanced methods on miniImageNet and tieredImageNet datasets.",579ee305d538a679d72b808ffe8322680561a177,Semantic Scholar,,somewhat relevant,"The paper focuses on prompt learning, specifically exploring a hybrid approach combining discrete and continuous prompts, which is relevant to prompt engineering but not directly focused on hard prefix prompting." 
omniscientdb a large language modelaugmented dbms that knows what other dbmss do not know,"['Matthias Urban', 'Duc Dat Nguyen', 'Carsten Binnig']",http://publikationen.ub.uni-frankfurt.de/files/74426/06_08.pdf,2023-06-18,,"In this paper, we present our vision of OmniscientDB, a novel database that leverages the implicitly-stored knowledge in large language models to augment datasets for analytical queries or even machine learning tasks. OmniscientDB empowers its users to augment their datasets by means of simple SQL queries and thus has the potential to dramatically reduce the manual overhead associated with data integration. It uses automatic prompt engineering to construct appropriate prompts for given SQL queries and passes them to a large language model like GPT-3 to contribute additional data (i.e., new rows, columns, or entire tables), augmenting the explicitly stored data. Our initial evaluation demonstrates the general feasibility of our vision, explores different prompting techniques in greater detail, and points towards several directions for future research.",59266e06cdb867c2541603f9d94e13f67d55938f,Semantic Scholar,,highly relevant,"The paper discusses the use of varied prompting strategies including simple prompts, templated prompts, in-context learning (ICL), and multi-round iterative questioning for optimizing LLM performance, directly engaging with the concept of prompt engineering." mindwatch a smart cloudbased ai solution for suicide ideation detection leveraging large language models,"['Runa Bhaumik', 'V. Srivastava', 'A. Jalali', 'Shanta Ghosh', 'Ranganathan Chandrasekharan']",https://www.medrxiv.org/content/medrxiv/early/2023/09/26/2023.09.25.23296062.full.pdf,2023-09-26,,"Suicide, a serious public health concern affecting millions of individuals worldwide, refers to the intentional act of ending one's own life. Mental health issues such as depression, frustration, and hopelessness can directly or indirectly influence the emergence of suicidal thoughts. Early identification of these thoughts is crucial for timely diagnosis. In recent years, advances in artificial intelligence (AI) and natural language processing (NLP) have paved the way for revolutionizing mental health support and education. In this proof-of-concept study, we have created MindWatch, a cutting-edge tool that harnesses the power of AI-driven language models to serve as a valuable computer-aided system for the mental health professions to achieve two important goals such as early symptom detection, and personalized psychoeducation. We utilized ALBERT and Bio-Clinical BERT language models and fine-tuned them with the Reddit dataset to build the classifiers. We evaluated the performance of bi-LSTM, ALBERT, Bio-Clinical BERT, OpenAI GPT3.5 (via prompt engineering), and an ensembled voting classifier to detect suicide ideation. For personalized psychoeducation, we used the state-of-the-art Llama 2 foundation model leveraging prompt engineering. The tool is developed in the Amazon Web Service environment. All models performed exceptionally well, with accuracy and precision/recall greater than 92%. ALBERT performed better (AUC=.98) compared to the zero-shot classification accuracies obtained from OpenAI GPT3.5 Turbo (ChatGPT) on hidden datasets (AUC=.91). Furthermore, we observed that the inconclusiveness rate of the Llama 2 model is low while tested for few examples. This study emphasizes how transformer models can help provide customized psychoeducation to individuals dealing with mental health issues. 
By tailoring content to address their unique mental health conditions, treatment choices, and self-help resources, this approach empowers individuals to actively engage in their recovery journey. Additionally, these models have the potential to advance the automated detection of depressive disorders.",5e01b8383e9260b2e251274a6bad89677cb1bbd3,Semantic Scholar,,somewhat relevant,The paper uses prompting with knowledge entity metadata to improve Knowledge Graph acquisition but focuses on predicate embeddings and does not explicitly mention using hard prefix prompts. the creativity of textbased generative art,['J. Oppenlaender'],http://arxiv.org/pdf/2206.02904,,,"Text-based generation of digital images has made a giant leap towards becoming a mainstream phenomenon. With text-based generative systems, anybody can create digital images and artworks. This provokes the question of whether text-based generative art is creative. This paper expounds on the nature of human creativity involved in text-based generative art with a specific focus on the practice of prompt engineering, drawing on Rhodes's conceptual model of creativity. The paper critiques the current product-centered view of creativity which may fall short in the context of text-based generative art. A case exemplifying this shortcoming is provided and future opportunities for research on text-based generative art are outlined.",65d6c17a5f947a2aa92ab1fa0b876e4e3c75720c,Semantic Scholar,,somewhat relevant,"The paper mentions the use of a 'target prompt template' in an encoder-decoder method, which aligns with the concept of prompt engineering." artificial intelligence model gpt4 narrowly fails simulated radiological protection exam,"['G. Roemer', 'A. Li', 'U. Mahmood', 'L. Dauer', 'M. Bellamy']",https://iopscience.iop.org/article/10.1088/1361-6498/ad1fdf/pdf,2024-01-17,,"This study assesses the efficacy of Generative Pre-Trained Transformers (GPT) published by OpenAI in the specialized domains of radiological protection and health physics. Utilizing a set of 1064 surrogate questions designed to mimic a health physics certification exam, we evaluated the models' ability to accurately respond to questions across five knowledge domains. Our results indicated that neither model met the 67% passing threshold, with GPT-3.5 achieving a 45.3% weighted average and GPT-4 attaining 61.7%. Despite GPT-4's significant parameter increase and multimodal capabilities, it demonstrated superior performance in all categories yet still fell short of a passing score. The study's methodology involved a simple, standardized prompting strategy without employing prompt engineering or in-context learning, which are known to potentially enhance performance. The analysis revealed that GPT-3.5 formatted answers more correctly, despite GPT-4's higher overall accuracy. The findings suggest that while GPT-3.5 and GPT-4 show promise in handling domain-specific content, their application in the field of radiological protection should be approached with caution, emphasizing the need for human oversight and verification.",67fb64933bb7c3376d13db0812cdd7f579257ed3,Semantic Scholar,,highly relevant,"The paper directly focuses on the manipulation and design of prompts ('jailbreak prompts') to investigate and exploit vulnerabilities in LLMs, aligning closely with prompt engineering." 
zero and fewshot nlp with pretrained language models,"['Iz Beltagy', 'Arman Cohan', 'Robert Logan IV', 'Sewon Min', 'Sameer Singh']",https://aclanthology.org/2022.acl-tutorials.6.pdf,,,"The ability to efficiently learn from little-to-no data is critical to applying NLP to tasks where data collection is costly or otherwise difficult. This is a challenging setting both academically and practically—particularly because training neural models typically require large amount of labeled data. More recently, advances in pretraining on unlabelled data have brought up the potential of better zero-shot or few-shot learning (Devlin et al., 2019; Brown et al., 2020). In particular, over the past year, a great deal of research has been conducted to better learn from limited data using large-scale language models. In this tutorial, we aim at bringing interested NLP researchers up to speed about the recent and ongoing techniques for zero- and few-shot learning with pretrained language models. Additionally, our goal is to reveal new research opportunities to the audience, which will hopefully bring us closer to address existing challenges in this domain.",037110f8e99488f9b8f6e962da0a912d927695e5,Semantic Scholar,,highly relevant,"The paper focuses on crafting jailbreak prompts to bypass safeguards in LLMs, which directly relates to prompt engineering, albeit in an adversarial context." speakerbox fewshot learning for speaker identification with transformers,"['Eva Maxfield Brown', 'To Huynh', 'Nicholas Weber']",https://joss.theoj.org/papers/10.21105/joss.05132.pdf,2023-03-20,,"Automated speaker identification is a modeling challenge for research when large-scale corpora, such as audio recordings or transcripts, are relied upon for evidence (e",05555160ff32dc487ffb1ec5048a4f00b1709f79,Semantic Scholar,,highly relevant,"The paper focuses on creating specialized prompts ('jailbreaking prompts') to bypass ethical guidelines of LLMs, which aligns with the use of hard prefix prompting." a generative ai approach to pricing mechanisms and consumer behavior in the electric vehicle charging market,"['Sarthak Chaturvedi', 'Edward W. Chen', 'Ila P. Sharma', 'Omar Isaac Asensio']",https://ojs.aaai.org/index.php/AAAI-SS/article/download/27649/27422,2024-01-22,,"The electrification of transportation is a growing strategy to reduce mobile source emissions and air pollution globally. To encourage adoption of electric vehicles, there is a need for reliable evidence about pricing in public charging stations that can serve a greater number of communities. However, user-entered pricing information by thousands of charge point operators (CPOs) has created ambiguity for large-scale aggregation, increasing both the cost of analysis for researchers and search costs for consumers. In this paper, we use large language models to address standing challenges with price discovery in distributed digital data. We show that generative AI models can effectively extract pricing mechanisms from unstructured text with high accuracy, and at substantially lower cost of three to four orders of magnitude lower than human curation (USD 0.006 pennies per observation). We exploit the few-shot learning capabilities of GPT-4 with human-in-the-loop feedback—beating prior classification performance benchmarks with fewer training data. The most common pricing models include free, energy-based (per kWh), and time-based (per unit time), with tiered pricing (variable pricing based on usage) being the most prevalent among paid stations. 
Behavioral insights from a US nationally representative sample of 13,008 stations suggest that EV users are commonly frustrated with the slower than expected charging rates and the total cost of charging. This study uncovers additional consumer barriers to charging services concerning the need for better price standardization.",05c3f80b2048b40db29e3e691f54e690962ec4e7,Semantic Scholar,,highly relevant,"The paper focuses on jailbreak attacks using prompt techniques, such as role-playing scenarios and adversarial examples, which are integral to prompt engineering." metaaugmented prompt tuning for better fewshot learning,"['Kaihang Pan', 'Juncheng Billy Li', 'Hongye Song', 'Jun Lin', 'Xiaozhong Liu', 'Siliang Tang']",http://arxiv.org/pdf/2303.12314,,,"Prompt tuning is a parameter-efficient method, which freezes all PLM parameters and only prepends some additional tunable tokens called soft prompts to the input text. However, soft prompts heavily rely on a better initialization and may easily result in overfitting under few-shot settings, which causes prompt-tuning performing much worse than fine-tuning. To address the above issues, this paper proposes a novel Self-sUpervised Meta-prompt learning framework with MEta-gradient Regularization for few-shot generalization (SUMMER). We leverage self-supervised meta-learning to better initialize soft prompts and curriculum-based task augmentation is further proposed to enrich the meta-task distribution. Besides, a novel meta-gradient regularization method is integrated into the meta-prompt learning framework, which meta-learns to transform the raw gradient during few-shot learning into a domain-generalizable direction, thus alleviating the problem of overfitting. Extensive experiments show that SUMMER achieves better performance for different few-shot downstream tasks, and also exhibits a stronger domain generalization ability.",0619de4ffded9cd19269c73cde22e6595133bade,Semantic Scholar,,highly relevant,"The paper indicates the use and analysis of 182 prompts to exploit large language models, making it relevant to the study of prompt engineering." exploiting language model prompts using similarity measures a case study on the wordincontext task,"['Mohsen Tabasi', 'Kiamehr Rezaee', 'Mohammad Taher Pilehvar']",https://aclanthology.org/2022.acl-short.36.pdf,,,"As a recent development in few-shot learning, prompt-based techniques have demonstrated promising potential in a variety of natural language processing tasks. However, despite proving competitive on most tasks in the GLUE and SuperGLUE benchmarks, existing prompt-based techniques fail on the semantic distinction task of the Word-in-Context (WiC) dataset. Specifically, none of the existing few-shot approaches (including the in-context learning of GPT-3) can attain a performance that is meaningfully different from the random baseline. Trying to fill this gap, we propose a new prompting technique, based on similarity metrics, which boosts few-shot performance to the level of fully supervised methods. Our simple adaptation shows that the failure of existing prompt-based techniques in semantic distinction is due to their improper configuration, rather than lack of relevant knowledge in the representations. 
We also show that this approach can be effectively extended to other downstream tasks for which a single prompt is sufficient.",0a0e48c469b124c9a03d4bc841311f59424e97f2,Semantic Scholar,,somewhat relevant,"The paper discusses few-shot prompting of a large language model for semi-supervised sequence generation, which aligns with the topic of prompt engineering." hyperspectral classification of frost damage stress in tomato plants based on fewshot learning,"['Shiwei Ruan', 'Hao Cang', 'Huixin Chen', 'Tianying Yan', 'Fei Tan', 'Yuan Zhang', 'Long Duan', 'Peng Xing', 'Li Guo', 'Pan Gao', 'Wei Xu']",https://www.mdpi.com/2073-4395/13/9/2348/pdf?version=1694248497,2023-09-09,,"Early detection and diagnosis of crop anomalies is crucial for enhancing crop yield and quality. Recently, the combination of machine learning and deep learning with hyperspectral images has significantly improved the efficiency of crop detection. However, acquiring a large amount of properly annotated hyperspectral data on stressed crops requires extensive biochemical experiments and specialized knowledge. This limitation poses a challenge to the construction of large-scale datasets for crop stress analysis. Meta-learning is a learning approach that is capable of learning to learn and can achieve high detection accuracy with limited training samples. In this paper, we introduce meta-learning to hyperspectral imaging and crop detection for the first time. In addition, we gathered 88 hyperspectral images of drought-stressed tomato plants and 68 images of freeze-stressed tomato plants. The data related to drought serve as the source domain, while the data related to frost damage serve as the target domain. Due to the difficulty of obtaining target domain data from real-world testing scenarios, only a limited amount of target domain data and source domain data are used for model training. The results indicated that meta-learning, with a minimum of eight target domain samples, achieved a detection accuracy of 69.57%, precision of 59.29%, recall of 66.32% and F1-score of 62.61% for classifying the severity of frost stress, surpassing other methods with a target domain sample size of 20. Moreover, for determining whether the plants were under stress, meta-learning, with a minimum of four target domain samples, achieved a detection accuracy of 89.1%, precision of 89.72%, recall of 93.08% and F1-score of 91.37% outperforming other methods at a target domain sample size of 20. The results show that meta-learning methods require significantly less data across different domains compared to other methods. The performance of meta-learning techniques thoroughly demonstrates the feasibility of rapidly detecting crop stress without the need for collecting a large amount of target stress data. This research alleviates the data annotation pressure for researchers and provides a foundation for detection personnel to anticipate and prevent potential large-scale stress damage to crops.",0acabdcce3f1f64740b9feb068ca11108b84e369,Semantic Scholar,,highly relevant,"The paper uses a task-specific prompt in conjunction with GPT-4 to synthesize scene graphs, indicating it involves prompt engineering." contextualized soft prompts for extraction of event arguments,"['Chien Van Nguyen', 'Hieu Man', 'Thien Huu Nguyen']",https://aclanthology.org/2023.findings-acl.266.pdf,,,"Event argument extraction (EAE) is a sub-task of event extraction where the goal is to identify roles of entity mentions for events in text. 
The current state-of-the-art approaches for this problem explore prompt-based methods to prompt pre-trained language models for arguments over input context. However, existing prompt-based methods mainly rely on discrete and manually-designed prompts that cannot exploit specific context for each example to improve customization for optimal performance. In addition, the discrete nature of current prompts prevents the incorporation of relevant context from multiple external documents to enrich prompts for EAE. To this end, we propose a novel prompt-based method for EAE that introduces soft prompts to facilitate the encoding of individual example context and multiple relevant documents to boost EAE. We extensively evaluate the proposed method on benchmark datasets for EAE to demonstrate its benefits with state-of-the-art performance.",1f79ec669e3b6701c814d0165ad281796a49bd13,Semantic Scholar,,highly relevant,"The paper describes using progressive prompting augmentation with LLMs for knowledge graph construction, clearly involving prompt engineering techniques." promptbased approach for czech sentiment analysis,"['Jakub Šmíd', 'P. Přibáň']",https://doi.org/10.26615/978-954-452-092-2_118,,,"This paper introduces the first prompt-based methods for aspect-based sentiment analysis and sentiment classification in Czech. We employ the sequence-to-sequence models to solve the aspect-based tasks simultaneously and demonstrate the superiority of our prompt-based approach over traditional fine-tuning. In addition, we conduct zero-shot and few-shot learning experiments for sentiment classification and show that prompting yields significantly better results with limited training examples compared to traditional fine-tuning. We also demonstrate that pre-training on data from the target domain can lead to significant improvements in a zero-shot scenario.",535ae2b443c63f35b462257179480dc5ca67e206,Semantic Scholar,,highly relevant,"The paper focuses on prompt injections and their impacts on large language models, which directly involves the topic of prompt engineering." gpts at factify 2022 prompt aided factverification (short paper),"['Pawan Kumar Sahu', 'Saksham Aggarwal', 'Taneesh Gupta', 'Gyanendra Das']",http://arxiv.org/pdf/2206.14913,2022-06-29,,"One of the most pressing societal issues is the fight against false news. The false claims, as difficult as they are to expose, create a lot of damage. To tackle the problem, fact verification becomes crucial and thus has been a topic of interest among diverse research communities. Using only the textual form of data we propose our solution to the problem and achieve competitive results with other approaches. We present our solution based on two approaches - PLM (pre-trained language model) based method and Prompt based method. The PLM-based approach uses the traditional supervised learning, where the model is trained to take 'x' as input and output prediction 'y' as P(y|x). Whereas, Prompt-based learning reflects the idea to design input to fit the model such that the original objective may be re-framed as a problem of (masked) language modeling. We may further stimulate the rich knowledge provided by PLMs to better serve downstream tasks by employing extra prompts to fine-tune PLMs. Our experiments showed that the proposed method performs better than just fine-tuning PLMs. 
We achieved an F1 score of 0.6946 on the FACTIFY dataset and a 7th position on the competition leader-board.",c96a8150c82a0ce9c8c1e069590f534939a30038,Semantic Scholar,,highly relevant,"The paper focuses on using LLMs to refine and optimize prompts, which directly pertains to the development and improvement of prompt engineering techniques." vpn variation on prompt tuning for namedentity recognition,"['Niu Hu', 'Xu Zhou', 'Bing Xu', 'Han Liu', 'Xiangjin Xie', 'Haitao Zheng']",https://www.mdpi.com/2076-3417/13/14/8359/pdf?version=1689827421,2023-07-19,,"Recently, prompt-based methods have achieved a promising performance in many natural language processing benchmarks. Despite success in sentence-level classification tasks, prompt-based methods work poorly in token-level tasks, such as named entity recognition (NER), due to the sophisticated design of entity-related templates. Note that the nature of prompt tuning makes full use of the parameters of the mask language model (MLM) head, while previous methods solely utilized the last hidden layer of language models (LMs) and the power of the MLM head is overlooked. In this work, we discovered the characteristics of semantic feature changes in samples after being processed using MLMs. Based on this characteristic, we designed a prompt-tuning variant for NER tasks. We let the pre-trained model predict the label words derived from the training dataset at each position and fed the generated logits (non-normalized probability) to the CRF layer. We evaluated our method on three popular datasets, and the experiments showed that our proposed method outperforms the state-of-the-art model in all three Chinese datasets.",dc2aba63037ba3e1d6912170f5c292c89ca70b09,Semantic Scholar,,highly relevant,"The paper focuses on the effect of prompt engineering, specifically introducing an Automatic Prompt Optimization framework, which is directly related to hard prefix prompting." investigating prompt learning for chinese fewshot text classification with pretrained language models,"['Chengyu Song', 'Taihua Shao', 'Kejing Lin', 'Dengfeng Liu', 'Siyuan Wang', 'Honghui Chen']",https://www.mdpi.com/2076-3417/12/21/11117/pdf?version=1667385041,2022-11-02,,"Text classification aims to assign predefined labels to unlabeled sentences, which tend to struggle in real-world applications when only a few annotated samples are available. Previous works generally focus on using the paradigm of meta-learning to overcome the classification difficulties brought by insufficient data, where a set of auxiliary tasks is given. Accordingly, prompt-based approaches are proposed to deal with the low-resource issue. However, existing prompt-based methods mainly focus on English tasks, which generally apply English pretrained language models that can not directly adapt to Chinese tasks due to structural and grammatical differences. Thus, we propose a prompt-based Chinese text classification framework that uses generated natural language sequences as hints, which can alleviate the classification bottleneck well in low-resource scenarios. In detail, we first design a prompt-based fine-tuning together with a novel pipeline for automating prompt generation in Chinese. Then, we propose a refined strategy for dynamically and selectively incorporating demonstrations into each context. We present a systematic evaluation for analyzing few-shot performance on a wide range of Chinese text classification tasks. 
Our approach makes few assumptions about task resources and expertise and therefore constitutes a powerful, task-independent approach for few-shot learning.",eb4afff0eca0026fcc26a5f0c8a73184485e3a25,Semantic Scholar,,highly relevant,"The paper focuses on using robust prompt optimization to defend LMs against jailbreaking, which directly involves modifying input prompts, making it highly relevant to prompt engineering." use of large language models as a scalable approach to understanding public health discourse,"['L. Espinosa', 'M. Salathe']",https://www.medrxiv.org/content/medrxiv/early/2024/02/06/2024.02.06.24302383.full.pdf,2024-02-06,,"Online public health discourse is becoming more and more important in shaping public health dynamics. Large Language Models (LLMs) offer a scalable solution for analysing the vast amounts of unstructured text found on online platforms. Here, we explore the effectiveness of Large Language Models (LLMs), including GPT models and open-source alternatives, for extracting public stances towards vaccination from social media posts. Using an expert-annotated dataset of social media posts related to vaccination, we applied various LLMs and a rule-based sentiment analysis tool to classify the stance towards vaccination. We assessed the accuracy of these methods through comparisons with expert annotations and annotations obtained through crowdsourcing. Our results demonstrate that few-shot prompting of best-in-class LLMs are the best performing methods, and that all alternatives have significant risks of substantial misclassification. The study highlights the potential of LLMs as a scalable tool for public health professionals to quickly gauge public opinion on health policies and interventions, offering an efficient alternative to traditional data analysis methods. With the continuous advancement in LLM development, the integration of these models into public health surveillance systems could substantially improve our ability to monitor and respond to changing public health attitudes.",144001be42e0a97dff651126841ebcd70d6c0f01,Semantic Scholar,,highly relevant,"The paper focuses on optimizing hard prompt tuning (HPT) for pretrained language models, which directly relates to the topic of hard prefix prompting." can language models understand physical concepts,"['Lei Li', 'Jingjing Xu', 'Qingxiu Dong', 'Ce Zheng', 'Qi Liu', 'Lingpeng Kong', 'Xu Sun']",http://arxiv.org/pdf/2305.14057,2023-05-23,,"Language models~(LMs) gradually become general-purpose interfaces in the interactive and embodied world, where the understanding of physical concepts is an essential prerequisite. However, it is not yet clear whether LMs can understand physical concepts in the human world. To investigate this, we design a benchmark VEC that covers the tasks of (i) Visual concepts, such as the shape and material of objects, and (ii) Embodied Concepts, learned from the interaction with the world such as the temperature of objects. Our zero (few)-shot prompting results show that the understanding of certain visual concepts emerges as scaling up LMs, but there are still basic concepts to which the scaling law does not apply. For example, OPT-175B performs close to humans with a zero-shot accuracy of 85\% on the material concept, yet behaves like random guessing on the mass concept. Instead, vision-augmented LMs such as CLIP and BLIP achieve a human-level understanding of embodied concepts. 
Analysis indicates that the rich semantics in visual representation can serve as a valuable source of embodied knowledge. Inspired by this, we propose a distillation method to transfer embodied knowledge from VLMs to LMs, achieving performance gain comparable with that by scaling up the parameters of LMs 134x. Our dataset is available at \url{https://github.com/TobiasLee/VEC}",1caa2a29d3ca38d0e5111f4f9ae140727bb7d567,Semantic Scholar,,highly relevant,"The paper is a comprehensive exploration of the evolution of prompt engineering, directly addressing advancements and techniques relevant to the topic." fewshot prompting towards controllable response generation,"['Hsuan Su', 'Po-Han Chi', 'Shih-Cheng Huang', 'Chung Ho Lam', 'Saurav Sahay', 'Shang-Tse Chen', 'Hung-yi Lee']",http://arxiv.org/pdf/2206.03931,,,"Much literature has shown that prompt-based learning is an efficient method to make use of the large pre-trained language model. Recent works also exhibit the possibility of steering a chatbot’s output by plugging in an ap-propriate prompt. Gradient-based methods are often used to perturb the prompts. However, some language models are not even available to the public. In this work, we first explored the combination of prompting and reinforcement learning (RL) to steer models’ generation without accessing any of the models’ parameters. Second, to reduce the training effort and enhance the generalizability to the unseen task, we apply multi-task learning to make the model learn to generalize to new tasks better. The experiment results show that our proposed method can successfully control several state-of-the-art (SOTA) dialogue models without accessing their parameters. Furthermore, the model demonstrates the strong ability to quickly adapt to an unseen task in fewer steps than the baseline model.",308a59020d320f620f34f96c9ecdc187baff9fa1,Semantic Scholar,,somewhat relevant,"The paper discusses using prompt engineering to automate Software Engineering tasks with LLMs, which directly aligns with the subject of prompt engineering." “covid vaccine is against covid but oxford vaccine is made at oxford!” semantic interpretation of proper noun compounds,"['Keshav Kolluru', 'Gabriel Stanovsky', 'Mausam']",http://arxiv.org/pdf/2210.13039,2022-10-24,,"Proper noun compounds, e.g., “Covid vaccine”, convey information in a succinct manner (a “Covid vaccine” is a “vaccine that immunizes against the Covid disease”). These are commonly used in short-form domains, such as news headlines, but are largely ignored in information-seeking applications. To address this limitation, we release a new manually annotated dataset, ProNCI, consisting of 22.5K proper noun compounds along with their free-form semantic interpretations. ProNCI is 60 times larger than prior noun compound datasets and also includes non-compositional examples, which have not been previously explored. We experiment with various neural models for automatically generating the semantic interpretations from proper noun compounds, ranging from few-shot prompting to supervised learning, with varying degrees of knowledge about the constituent nouns. We find that adding targeted knowledge, particularly about the common noun, results in performance gains of upto 2.8%. Finally, we integrate our model generated interpretations with an existing Open IE system and observe an 7.5% increase in yield at a precision of 85%. 
The dataset and code are available at https://github.com/dair-iitd/pronci.",33285e02758788b681754d283df20971fef6e31f,Semantic Scholar,,highly relevant,"The paper explicitly discusses prompt design and engineering, introducing core concepts and advanced techniques, which directly aligns with the topic of prompt engineering." multilingual social media text generation and evaluation with fewshot prompting,['Mack Blackburn'],https://aclanthology.org/2022.gem-1.39.pdf,,,"This work adapts large language models to generate multilingual social media text that meets several objectives simultaneously: topic relevance, author style consistency, and reply validity. Leveraging existing online information behavior simulators, which currently only forecast activities but not content, our approach comprised of generalizable prompt formation and efficient evaluation to produce a believable, personalized, and responsive synthetic social network. According to some preliminary experiments, our multi-objective prompt formation and automatic evaluation/selection methods are able to yield a significant number of high-quality synthetic texts according to both standardized and trained metrics.",36731d3f9809535d5f57cc5cd610d92428a50716,Semantic Scholar,,highly relevant,"The paper focuses on generating high-quality prompts for diffusion-based text-to-image models, directly relevant to prompt engineering." continued pretraining for better zero and fewshot promptability,"['Zhaofeng Wu', 'IV RobertL.Logan', 'Pete Walsh', 'Akshita Bhagia', 'Dirk Groeneveld', 'Sameer Singh', 'Iz Beltagy']",http://arxiv.org/pdf/2210.10258,2022-10-19,,"Recently introduced language model prompting methods can achieve high accuracy in zero- and few-shot settings while requiring few to no learned task-specific parameters. Nevertheless, these methods still often trail behind full model finetuning. In this work, we investigate if a dedicated continued pretraining stage could improve “promptability”, i.e., zero-shot performance with natural language prompts or few-shot performance with prompt tuning. We reveal settings where existing continued pretraining methods lack promptability. We also identify current methodological gaps, which we fill with thorough large-scale experiments. We demonstrate that a simple recipe, continued pretraining that incorporates a trainable prompt during multi-task learning, leads to improved promptability in both zero- and few-shot settings compared to existing methods, up to 31% relative. On the other hand, we find that continued pretraining using MAML-style meta-learning, a method that directly optimizes few-shot promptability, yields subpar performance. We validate our findings with two prompt tuning methods, and, based on our results, we provide concrete recommendations to optimize promptability for different use cases.",53868a2a4caea7afc487ef08993372b186fb2ddb,Semantic Scholar,,highly relevant,"The paper explicitly mentions using prompt engineering with Large Language Models for reinforcement learning, indicating relevance to prompt engineering." datatotext generation for severely underresourced languages with gpt35 a bit of help needed from google translate (webnlg 2023),"['Michela Lorandi', 'Anya Belz']",https://arxiv.org/pdf/2308.09957,2023-08-19,,"LLMs are great at tasks involving English which dominates in their training data. We explore their ability to address tasks involving languages that are severely under-represented in their training data. 
More specifically, we do this in the context of data-to-text generation for Irish, Maltese, Welsh and Breton. During the prompt-engineering phase we tested GPT-3.5 and~4 with a range of prompt types and formats on a small sample of example input/output pairs. We then fully evaluated the two most promising prompts in two scenarios: (i) direct generation into the under-resourced languages, and (ii) generation into English followed by translation into the under-resourced languages. We find that few-shot prompting works better for direct generation into under-resourced languages, but that the difference disappears when pivoting via English. The few-shot + translation system variants were submitted to the WebNLG 2023 shared task where they outperformed all other systems by substantial margins in all languages on all automatic metrics. We conclude that good performance can be achieved with state-of-the-art LLMs out-of-the box for under-resourced languages. However, best results (for Welsh) of BLEU 25.12, ChrF++ 0.55, and TER 0.64 are well below the lowest ranked English system at WebNLG’20 with BLEU 0.391, ChrF++ 0.579, and TER 0.564.",842f79c5acab440f8d7a592201738a3e854a5186,Semantic Scholar,,highly relevant,"The paper explicitly mentions the use of prompt engineering to predict travel behavior with LLMs, aligning closely with the topic." the adaio system at the bea2023 shared task shared task generating ai teacher responses in educational dialogues,"['Adaeze Adigwe', 'Zheng Yuan']",http://arxiv.org/pdf/2306.05360,2023-06-08,,"This paper presents the ADAIO team’s system entry in the Building Educational Applications (BEA) 2023 Shared Task on Generating AI Teacher Responses in Educational Dialogues. The task aims to assess the performance of state-of-the-art generative models as AI teachers in producing suitable responses within a student-teacher dialogue. Our system comprises evaluating various baseline models using OpenAI GPT-3 and designing diverse prompts to prompt the OpenAI models for teacher response generation. After the challenge, our system achieved second place by employing a few-shot prompt-based approach with the OpenAI text-davinci-003 model. The results highlight the few-shot learning capabilities of large-language models, particularly OpenAI’s GPT-3, in the role of AI teachers.",97d9d728f924c1f6cc085844136a481cac07c4b0,Semantic Scholar,,highly relevant,"The paper directly mentions 'prompt engineering for LLM utilization,' indicating a focus on designing prompts for Large Language Models, which aligns with the topic of prompt engineering." does gpt3 grasp metaphors identifying metaphor mappings with generative language models,"['Lennart Wachowiak', 'Dagmar Gromann']",https://aclanthology.org/2023.acl-long.58.pdf,,,"Conceptual metaphors present a powerful cognitive vehicle to transfer knowledge structures from a source to a target domain. Prior neural approaches focus on detecting whether natural language sequences are metaphoric or literal. We believe that to truly probe metaphoric knowledge in pre-trained language models, their capability to detect this transfer should be investigated. To this end, this paper proposes to probe the ability of GPT-3 to detect metaphoric language and predict the metaphor’s source domain without any pre-set domains. We experiment with different training sample configurations for fine-tuning and few-shot prompting on two distinct datasets. 
When provided 12 few-shot samples in the prompt, GPT-3 generates the correct source domain for a new sample with an accuracy of 65.15% in English and 34.65% in Spanish. GPT’s most common error is a hallucinated source domain for which no indicator is present in the sentence. Other common errors include identifying a sequence as literal even though a metaphor is present and predicting the wrong source domain based on specific words in the sequence that are not metaphorically related to the target domain.",b31fb03a86cd44860f1c38e5c7032d9aed10d2f2,Semantic Scholar,,somewhat relevant,"The paper discusses improving code generation with LLMs using a new approach called AlphaCodium, which implies using specific prompting strategies, thus it is relevant to prompt engineering." a comparative study of prompting strategies for legal text classification,"['Ali Hakimi Parizi', 'Yuyang Liu', 'Prudhvi Nokku', 'Sina Gholamian', 'David Emerson']",https://aclanthology.org/2023.nllp-1.25.pdf,,,"In this study, we explore the performance of large language models (LLMs) using different prompt engineering approaches in the context of legal text classification. Prior research has demonstrated that various prompting techniques can improve the performance of a diverse array of tasks done by LLMs. However, in this research, we observe that professional documents, and in particular legal documents, pose unique challenges for LLMs. We experiment with several LLMs and various prompting techniques, including zero/few-shot prompting, prompt ensembling, chain-of-thought, and activation fine-tuning and compare the performance on legal datasets. Although the new generation of LLMs and prompt optimization techniques have been shown to improve generation and understanding of generic tasks, our findings suggest that such improvements may not readily transfer to other domains. Specifically, experiments indicate that not all prompting approaches and models are well-suited for the legal domain which involves complexities such as long documents and domain-specific language.",b6511d2cad195b4e595737f080031647296136f6,Semantic Scholar,,highly relevant,"The paper focuses on using prompt engineering to enhance chatbots' conversational abilities in healthcare, mentioning the use of a three-category prompt dictionary and prompt improvement mechanism." majority rule better patching via selfconsistency,"['Toufique Ahmed', 'Premkumar T. Devanbu']",https://arxiv.org/pdf/2306.00108,,,"Large Language models (LLMs) can be induced to solve non-trivial problems with “few-shot” prompts including illustrative problem-solution examples. Now if the few-shots also include “chain of thought” (CoT) explanations, which are of the form problem-explanation-solution, LLMs will generate a “explained” solution, and perform even better. Recently an exciting, substantially better technique, self-consistency [1] (S-C) has emerged, based on the intuition that there are many plausible explanations for the right solution; when the LLM is sampled repeatedly to generate a pool of explanation-solution pairs, for a given problem, the most frequently occurring solutions in the pool (ignoring the explanations) tend to be even more likely to be correct! Unfortunately, the use of this highly-performant S-C (or even CoT) approach in software engineering settings is hampered by the lack of explanations; most software datasets lack explanations. 
In this paper, we describe an application of the S-C approach to program repair, using the commit log on the fix as the explanation, only in the illustrative few-shots. We achieve state-of-the-art results, beating previous approaches to prompting-based program repair, on the MODIT dataset; we also find evidence suggesting that the correct commit messages are helping the LLM learn to produce better patches.",c1a3dc24a2677b2c8a69ffd336b2112e1aa705b6,Semantic Scholar,,highly relevant,"The paper details the development process showing how prompt-engineering can optimize large language models for educational contexts, indicating a direct application of prompt engineering." an evaluation of log parsing with chatgpt,"['Van-Hoang Le', 'Hongyu Zhang']",https://arxiv.org/pdf/2306.01590,,,"Software logs play an essential role in ensuring the reliability and maintainability of large-scale software systems, as they are often the sole source of runtime information. Log parsing, which converts raw log messages into structured data, is an important initial step towards downstream log analytics. In recent studies, ChatGPT, the current cutting-edge large language model (LLM), has been widely applied to a wide range of software engineering tasks. However, its performance in automated log parsing remains unclear. In this paper, we evaluate ChatGPT’s ability to undertake log parsing by addressing two research questions. (1) Can ChatGPT effectively parse logs? (2) How does ChatGPT perform with different prompting methods? Our results show that ChatGPT can achieve promising results for log parsing with appropriate prompts, especially with few-shot prompting. Based on our findings, we outline several challenges and opportunities for ChatGPT-based log parsing.",c7f0c31bd260ccafd6995350f30707b3cf03ce9e,Semantic Scholar,,highly relevant,"The paper describes the use of an iterative prompt refinement process involving LLMs for symptom extraction, indicating a direct application of prompt engineering techniques." "problematic webpage identification a trilogy of hatespeech, search engines and gpt","['Ojasvin Sood', 'Sandipan Dandapat']",https://aclanthology.org/2023.woah-1.13.pdf,,,"In this paper, we introduce a fine-tuned transformer-based model focused on problematic webpage classification to identify webpages promoting hate and violence of various forms. Due to the unavailability of labelled problematic webpage data, first we propose a novel webpage data collection strategy which leverages well-studied short-text hate speech datasets. We have introduced a custom GPT-4 few-shot prompt annotation scheme taking various webpage features to label the prohibitively expensive webpage annotation task. The resulting annotated data is used to build our problematic webpage classification model. We report the accuracy (87.6% F1-score) of our webpage classification model and conduct a detailed comparison of it against other state-of-the-art hate speech classification model on problematic webpage identification task. Finally, we have showcased the importance of various webpage features in identifying a problematic webpage.",cb9c917af837d016b5977b9f158a713e1318e039,Semantic Scholar,,highly relevant,"The paper explicitly mentions the use of specialized prompts to fine-tune LLMs for depression detection and treatment, making it highly relevant to prompt engineering." towards expert systems for improved customer services using chatgpt as an inference engine,['C. P. 
Ezenkwu'],https://rgu-repository.worktribe.com/preview/1987218/EZENKWU%202023%20Towards%20expert%20systems%20%28AAM%29.pdf,2023-07-14,,"By harnessing both implicit and explicit customer data, companies can develop a more comprehensive understanding of their consumers, leading to better customer engagement and experience, and improved loyalty. As a result, businesses have embraced many AI technologies, including chatbots, sentiment analysis, voice assistants, predictive analytics, and natural language processing, within customer services and e-commerce. The arrival of ChatGPT, a state-of-the-art deep learning model trained with general knowledge in mind, has brought about a paradigm shift in how companies approach AI applications. However, given that most business problems are bespoke and require specialised domain expertise, ChatGPT needs to be aligned with the requisite task-oriented ability to solve these issues. This paper presents an iterative procedure that incorporates expert system development process models and prompt engineering, in the design of descriptive knowledge and few-shot prompts, as are necessary for ChatGPT-powered expert systems applications within customer services. Furthermore, this paper explores potential application areas for ChatGPT-powered expert systems in customer services, presenting opportunities for their effective utilisation in the business sector.",cc5869343d670c801512de910ab3bf0ca7bc5c4a,Semantic Scholar,,highly relevant,"The paper discusses integrating task context and user perceptions into human-ChatGPT interactions through prompt engineering, specifically to improve the initialization and refinement of prompts, which aligns with the topic of prompt engineering." utilizing language models to expand visionbased commonsense knowledge graphs,"['Navid Rezaei', 'M. Reformat']",https://www.mdpi.com/2073-8994/14/8/1715/pdf?version=1660727694,2022-08-17,,"The introduction and ever-growing size of the transformer deep-learning architecture have had a tremendous impact not only in the field of natural language processing but also in other fields. The transformer-based language models have contributed to a renewed interest in commonsense knowledge due to the abilities of deep learning models. Recent literature has focused on analyzing commonsense embedded within the pre-trained parameters of these models and embedding missing commonsense using knowledge graphs and fine-tuning. We base our current work on the empirically proven language understanding of very large transformer-based language models to expand a limited commonsense knowledge graph, initially generated only on visual data. The few-shot-prompted pre-trained language models can learn the context of an initial knowledge graph with less bias than language models fine-tuned on a large initial corpus. It is also shown that these models can offer new concepts that are added to the vision-based knowledge graph. This two-step approach of vision mining and language model prompts results in the auto-generation of a commonsense knowledge graph well equipped with physical commonsense, which is human commonsense gained by interacting with the physical world. To prompt the language models, we adapted the chain-of-thought method of prompting. To the best of our knowledge, it is a novel contribution to the domain of the generation of commonsense knowledge, which can result in a five-fold cost reduction compared to the state-of-the-art. Another contribution is assigning fuzzy linguistic terms to the generated triples. 
The process is end to end in the context of knowledge graphs. It means the triples are verbalized to natural language, and after being processed, the results are converted back to triples and added to the commonsense knowledge graph.",cc7df8fa3b642269531c25af065c2cc78e5000e0,Semantic Scholar,,highly relevant,"The paper mentions the utilization of prompt engineering to exploit Large Language Models (LLMs) for recovering purified examples, directly indicating its relevance to prompt engineering." naisteacher a prompt and rerank approach to generating teacher utterances in educational dialogues,"['Justin Vasselli', 'Christopher Vasselli', 'Adam Nohejl', 'Taro Watanabe']",https://aclanthology.org/2023.bea-1.63.pdf,,,"This paper presents our approach to the BEA 2023 shared task of generating teacher responses in educational dialogues, using the Teacher-Student Chatroom Corpus. Our system prompts GPT-3.5-turbo to generate initial suggestions, which are then subjected to reranking. We explore multiple strategies for candidate generation, including prompting for multiple candidates and employing iterative few-shot prompts with negative examples. We aggregate all candidate responses and rerank them based on DialogRPT scores. To handle consecutive turns in the dialogue data, we divide the task of generating teacher utterances into two components: teacher replies to the student and teacher continuations of previously sent messages. Through our proposed methodology, our system achieved the top score on both automated metrics and human evaluation, surpassing the reference human teachers on the latter.",d0482bd01de9d0912acf4e5338c7799eba4b9360,Semantic Scholar,,somewhat relevant,"The abstract mentions 'prompt engineering complexity' as one of the limitations in current LLM benchmarks, indicating that it addresses issues related to prompt engineering." mdc at biolaysumm task 1 evaluating gpt models for biomedical lay summarization,"['Oisn Turbitt', 'R. Bevan', 'Mouhamad Aboshokor']",https://aclanthology.org/2023.bionlp-1.65.pdf,,,"This paper presents our approach to the BioLaySumm Task 1 shared task, held at the BioNLP 2023 Workshop. The effective communication of scientific knowledge to the general public is often limited by the technical language used in research, making it difficult for non-experts to comprehend. To address this issue, lay summaries can be used to explain research findings to non-experts in an accessible form. We conduct an evaluation of autoregressive language models, both general and specialized for the biomedical domain, to generate lay summaries from biomedical research article abstracts. Our findings demonstrate that a GPT-3.5 model combined with a straightforward few-shot prompt produces lay summaries that achieve significantly relevance and factuality compared to those generated by a fine-tuned BioGPT model. However, the summaries generated by the BioGPT model exhibit better readability. Notably, our submission for the shared task achieved 1st place in the competition.",e4e65df11e4d063199c6035004be2b28c3e2f82f,Semantic Scholar,,highly relevant,"The paper discusses developing a prompt engineering approach for mask generation in cell segmentation, indicating direct relevance to the topic of prompt engineering." leveraging large language models for mental health prediction via online text data,"['Xuhai Xu', 'Bingsheng Yao', 'Yuanzhe Dong', 'Hong Yu', 'James A. Hendler', 'A. 
Dey', 'Dakuo Wang']",https://arxiv.org/pdf/2307.14385,,,"The recent technology boost of large language models (LLMs) has empowered a variety of applications. However, there is very little research on understanding and improving LLMs’ capability for the mental health domain. In this work, we present the first comprehensive evaluation of multiple LLMs, including Alpaca, Alpaca-LoRA, and GPT-3.5, on various mental health prediction tasks via online text data. We conduct a wide range of experiments, covering zero-shot prompting, few-shot prompting, and instruction finetuning. The results indicate the promising yet limited performance of LLMs with zero-shot and few-shot prompt designs for mental health tasks. More importantly, our experiments show that instruction finetuning can significantly boost the performance of LLMs for all tasks simultaneously. Our best-finetuned model, Mental-Alpaca, outperforms GPT-3.5 (25 times bigger) by 16.7% on balanced accuracy and performs on par with the state-of-the-art task-specific model. We summarize our findings into a set of action guidelines for future researchers, engineers, and practitioners on how to empower LLMs with better mental health domain knowledge and become an expert in mental health prediction tasks.",ea284d2045672daf44deffa3f0b7ce154630424c,Semantic Scholar,,highly relevant,"The paper directly discusses the use of customized prompts and their formulation for improving LLM performance in medical reporting, indicating a focus on prompt engineering." summqa at mediqachat 2023 incontext learning with gpt4 for medical summarization,"['Yash Mathur', 'Sanketh Rangreji', 'Raghav Kapoor', 'Medha Palavalli', 'Amanda Bertsch', 'Matthew R. Gormley']",http://arxiv.org/pdf/2306.17384,2023-06-30,,"Medical dialogue summarization is challenging due to the unstructured nature of medical conversations, the use of medical terminologyin gold summaries, and the need to identify key information across multiple symptom sets. We present a novel system for the Dialogue2Note Medical Summarization tasks in the MEDIQA 2023 Shared Task. Our approach for sectionwise summarization (Task A) is a two-stage process of selecting semantically similar dialogues and using the top-k similar dialogues as in-context examples for GPT-4. For full-note summarization (Task B), we use a similar solution with k=1. We achieved 3rd place in Task A (2nd among all teams), 4th place in Task B Division Wise Summarization (2nd among all teams), 15th place in Task A Section Header Classification (9th among all teams), and 8th place among all teams in Task B. Our results highlight the effectiveness of few-shot prompting for this task, though we also identify several weaknesses of prompting-based approaches. We compare GPT-4 performance with several finetuned baselines. We find that GPT-4 summaries are more abstractive and shorter. We make our code publicly available.",ebb3d299213bae89b5d302cc3dfc36573ec83956,Semantic Scholar,,highly relevant,"The paper describes the integration of prompt engineering to enhance pedagogical methods through flipped classroom techniques, indicating the practical application of hard prefix prompting." ds4dh at mediqachat 2023 leveraging svm and gpt3 prompt engineering for medical dialogue classification and summarization,"['Boya Zhang', 'R. Mishra', 'D. 
Teodoro']",https://access.archive-ouverte.unige.ch/access/metadata/290c4289-0017-45ec-baa9-ff2fdd7948f9/download,2023-06-12,,"This paper presents the results of the Data Science for Digital Health (DS4DH) group in the MEDIQA-Chat Tasks at ACL-ClinicalNLP 2023. Our study combines the power of a classical machine learning method, Support Vector Machine, for classifying medical dialogues, along with the implementation of one-shot prompts using GPT-3.5. We employ dialogues and summaries from the same category as prompts to generate summaries for novel dialogues. Our findings exceed the average benchmark score, offering a robust reference for assessing performance in this field.",cd902673a9396b63fdaf2cf7e0e1ce25cc3c545c,Semantic Scholar,,highly relevant,"The paper discusses employing six prompt engineering strategies in combination with large language models for automatic scoring, directly addressing prompt engineering in its methodology." pengaruh model problem based instruction dipadu dengan teknik probing prompting terhadap kemampuan berpikir kritis dan hasil belajar kognitif,"['Wahyu Dewi Siskayanti', 'Siti Nurhidayati', 'Safnowandi Safnowandi']",https://e-journal.lp3kamandanu.com/index.php/panthera/article/download/76/130,2022-04-30,,"Based on the results of observations and interviews at MTs. NW Pengadang found that students' absorption in biology subjects was still lacking. This study aims to determine the effect of the Problem based instruction model combined with the Probing Prompting technique on critical thinking skills and cognitive learning outcomes of class VII MTs Biology students NW Pengadang for the 2017/2018 school year. The population of this study were all seventh grade students in the even semesters of MTs. NW Pengadang for the 2017/2018 school year. The samples in this study were students of class VII A as the experimental class using the Problem based instruction model with the Probing Prompting technique and class VII B students as the control class using the lecture and discussion method. Collecting critical thinking ability data using tests in the form of description questions. Measurement of students' cognitive learning outcomes using multiple-choice tests. The data analysis technique used descriptive statistics, namely the One Way ANOVA test with a significant level of 5%. Based on the results of the research on critical thinking skills, the control class has an average score of 53.24 in the less critical category, the experimental class has an average value of 67.18 in the critical category, the cognitive learning outcomes of the control class have an average value of 77, the experimental class has an average value 78.14. The results of hypothesis testing using One Way ANOVA with the help of SPSS stated that critical thinking ability has a value of > = 21,298 > 4,030 and for cognitive learning outcomes it has a value of > = 8,991 > 4,030. The results of the study can be concluded that: There is an effect of the Problem Based Instruction model combined with the Probing Prompting technique on the critical thinking ability and cognitive learning outcomes of the VII grade MTs Biology students NW Pengadang for the 2017/2018 school year.",09d880b59dc309ab5203f232fc84b6bf255ef190,Semantic Scholar,,somewhat relevant,"The paper mentions 'prompt engineering techniques' in the context of comparing AgentCoder's performance, implying the use of prompting methods in their approach." 
application of problem based learning approaches with probingprompting techniques to improve students' adaptive reasoning capabilities,"['N. Gardenia', 'T. Herman', 'Andri Rahadyan', 'T. Dahlan']",http://eudl.eu/pdf/10.4108/eai.12-10-2019.2296525,,,"This study aims to obtain an overview of the adaptive reasoning abilities of students who get mathematics learning through the Problem Based Learning approach with Probing-Prompting techniques compared to students who get conventional learning. The problems underlying this research include the adaptive reasoning ability of students in Indonesia is still low so innovation is needed in learning that can develop students' adaptive reasoning abilities. This research is a quasi-experimental research. Data obtained through research instruments in the form of tests and non-tests. Data analysis was carried out quantitatively. Quantitative analysis was performed by calculating the N-gaint using the normality test, and the Mann-Whitney U test. The results showed an increase in students' adaptive reasoning abilities in both groups, an increase in the adaptive reasoning abilities of students who obtained mathematics learning through the Problem Based Learning approach with Probing-Prompting is better than students who get conventional learning.",113c447e2cb21ef9b923a5e97922022a23c9e846,Semantic Scholar,,highly relevant,"The study emphasizes the significance of prompt engineering in the context of using large language models for medical term classification, directly relating to the topic." an investigation of applying large language models to spoken language learning,"['Yingming Gao', 'Baorian Nuchged', 'Ya Li', 'Linkai Peng']",https://www.mdpi.com/2076-3417/14/1/224/pdf?version=1703665691,2023-12-26,,"People have long desired intelligent conversational systems that can provide assistance in practical scenarios. The latest advancements in large language models (LLMs) are making significant strides toward turning this aspiration into a tangible reality. LLMs are believed to hold the most potential and value in education, especially in the creation of AI-driven virtual teachers that facilitate language learning. This study focuses on assessing the effectiveness of LLMs within the educational domain, specifically in the areas of spoken language learning, which encompass phonetics, phonology, and second language acquisition. To this end, we first introduced a new multiple-choice question dataset to evaluate the effectiveness of LLMs in the aforementioned scenarios, including the understanding and application of spoken language knowledge. Moreover, we investigated the influence of various prompting techniques such as zero- and few-shot methods (prepending the question with question-answer exemplars), chain-of-thought (CoT) prompting, in-domain exemplars, and external tools. We conducted a comprehensive evaluation of popular LLMs (20 distinct models) using these methods. The experimental results showed that the task of extracting conceptual knowledge posed few challenges for these LLMs, whereas the task of application questions was relatively difficult. In addition, some widely proven effective prompting methods combined with domain-specific examples resulted in significant performance improvements compared to the zero-shot baselines. Additionally, some other preliminary experiments also demonstrated the strengths and weaknesses of different LLMs. 
The findings of this study can shed light on the application of LLMs to spoken language learning.",13eacc692aeb58c7987c535c439eeb345076bea2,Semantic Scholar,,highly relevant,"The paper mentions the use of prompt engineering and its impact on improving Large Language Models for in-context learning, making it relevant to the study of prompt engineering." mengkaji keterampilan berpikir kritis siswa menggunakan model problem based learning berbantuan teknik probing prompting (pblpp),"['Vitoria Venisia Pereira', 'Achmad Samsudin', 'J. A. Utama']",https://ejournal.ummuba.ac.id/index.php/mp/article/download/1175/711,2023-06-03,,"Critical thinking skills are one of the important elements that must be possessed by students. In learning, critical thinking skills have been applied in learning environments to address student challenges in the 21st century. This study aims to explore students' critical thinking skills during the learning process using the Problem-Based Learning (PBL) model assisted by the probing prompting technique. The method used in this study was a literature review, by analyzing and synthesizing several Physics education journals published in 2017-2022 which discussed critical thinking skills, Problem-Based Learning (PBL) models, and probing prompting techniques with the research sample being students at elaboration of junior, high school, and college students who have different levels of understanding. Based on the results of the 40 journals in this study, it was found that the PBL model assisted by the probing prompting technique is an innovative learning model that can be applied in active learning to improve critical thinking skills, which is one of the dimensions in the Pancasila student profile. This is because this model displays various problems that occur in everyday life, through a series of questions as a guide so that it can stimulate students to seek and find solutions to a problem. The results of this study can be used as recommendations for teachers in classroom learning activities to improve students’ critical thinking skills.",25aa47738137fb2d72225b36f38198c7f092f727,Semantic Scholar,,highly relevant,"The paper discusses a new point-based prompts generation strategy for image segmentation, which aligns with the concept of prompt engineering." prompting techniques to increase the return rate of mailed questionnaires1,"['R. Winett', 'G. Stewart', 'J. S. Majors']",https://europepmc.org/articles/pmc1311322?pdf=render,1978-09-01,,"To increase the return rate of questionnaires mailed to clergy and physicians concerning their mental-health practices, different prompts were used after the questionnaire was received during four mail-outs to four randomly drawn samples of clergy and physicians. For each mail-out, the sample was divided into experimental (received prompt) and comparison (no prompt) groups, and one type of prompt or combination was used. Non-returnees of the questionnaire in the experimental group received either: (a) a single telephone call, (b) a memo, (c) a package (personal letter and new questionnaire) or package plus a telephone call, or (d) a double call. Comparison physicians and clergy were mailed only the original questionnaire. Relative to their respective comparison group's return rate, which averaged 22% across the four mail-outs (range 18% to 24%), the single call and package alone about doubled the overall return rate, the package and call increased the return rate about two-and-a-half fold, and the double call almost tripled the return rate. 
The memo was ineffective. A cost-effectiveness analysis indicated that the double-call procedure was less expensive than the single call, and much less expensive than the package alone or package with a call in securing returns. An analysis of the pattern of returns showed clearly that when prompts were not delivered (comparison groups), very few returns were received after about seven days from the initial mail-out. Most returns from prompts (experimental groups) were received by several days after the prompt. The results were seen as salient to the problem of reducing selection or volunteer bias in questionnaire studies and subsequent research demonstrating the effectiveness of telephone calls made about a week after distribution of surveys in securing high return rates was discussed.",25e5c32b71d4fa8ff9e5b92959d3df4f0c62f1a6,Semantic Scholar,,highly relevant,"The paper explicitly mentions the investigation of prompting methods and their application in machine translation using large language models, making it highly relevant to the topic of prompt engineering." mitigating political bias in large language models using chain of thought prompting techniques,['Hiresh Poosarla'],https://doi.org/10.22214/ijraset.2024.58057,2024-01-31,,"Abstract: Recent advancements in Natural Language Processing (NLP) have led to the proliferation of sophisticated chatbots, with ChatGPT as a prominent example. However, these Large Language Models are often plagued with inherent political biases from their training datasets, which raises concerns regarding their ethical usage and reinforcement of existing societal biases. This research introduces Chain of Thought (CoT) prompting, which is a novel approach to mitigate political biases by guiding chatbots to think step by step with a logical approach.",5360b9628b4ff04f02c2f7a88a1445fd60b17c6a,Semantic Scholar,,somewhat relevant,"The paper mentions 'mitigating such bias through prompt engineering', indicating it discusses using prompt engineering as a solution." ability of children to perform touchscreen gestures and follow prompting techniques when using mobile apps,"['Savita Yadav', 'P. Chakraborty', 'A. Kaul', 'Pooja', 'Bhavya Gupta', 'A. Garg']",https://www.e-cep.org/upload/pdf/cep-2019-00997.pdf,2020-02-05,,"Background Children today get access to smartphones at an early age. However, their ability to use mobile apps has not yet been studied in detail. Purpose This study aimed to assess the ability of children aged 2–8 years to perform touchscreen gestures and follow prompting techniques, i.e., ways apps provide instructions on how to use them. Methods We developed one mobile app to test the ability of children to perform various touchscreen gestures and another mobile app to test their ability to follow various prompting techniques. We used these apps in this study of 90 children in a kindergarten and a primary school in New Delhi in July 2019. We noted the touchscreen gestures that the children could perform and the most sophisticated prompting technique that they could follow. Results Two- and 3-year-old children could not follow any prompting technique and only a minority (27%) could tap the touchscreen at an intended place. Four- to 6-year-old children could perform simple gestures like a tap and slide (57%) and follow instructions provided through animation (63%). Seven- and 8-year-old children could perform more sophisticated gestures like dragging and dropping (30%) and follow instructions provided in audio and video formats (34%). 
We observed a significant difference between the number of touchscreen gestures that the children could perform and the number of prompting techniques that they could follow (F=544.0407, P<0.05). No significant difference was observed in the performance of female versus male children (P>0.05). Conclusion Children gradually learn to use mobile apps beginning at 2 years of age. They become comfortable performing single-finger gestures and following nontextual prompting techniques by 8 years of age. We recommend that these results be considered in the development of mobile apps for children.",54b109d38b7e23fffc0c817bd32a9129320db7a6,Semantic Scholar,,highly relevant,"The paper focuses on improving GPT models' performance in clinical named entity recognition tasks through the development of a task-specific prompt framework, directly involving prompt engineering to enhance model outcomes." phase separation within a thin layer of polymer solution as prompt technique to predict membrane morphology and transport properties,"['T. Anokhina', 'I. Borisov', 'A. Yushkin', 'G. Vaganov', 'A. Didenko', 'A. Volkov']",https://www.mdpi.com/2073-4360/12/12/2785/pdf?version=1607325074,2020-11-25,,"In this work, the precipitation of a thin layer of a polymer solution was proposed to imitate the process of asymmetric membrane formation by a non-solvent induced phase separation (NIPS) technique. The phase inversion within the thin (<500 μm) and bulk (~2 cm) layer of polyamic-acid (PAA) in N-methyl-2-pyrrolidone (NMP) by using water as non-solvent was considered. It was shown that polymer films formed within the “limited” layer of polymer solution showed a good agreement with the morphology of corresponded asymmetric flat-sheet membranes even in the case of three-component casting solution (PAA/NMP/EtOH). At the same time, the polymer films formed on the interface of two bulk phases (“infinite” regime) did not fully correspond to the membrane structure. It was shown that up to 50% of NMP solvent in PAA solution can be replaced by ethanol, which can have a renewable origin. By changing the ethanol content in the casting solution, the average size of transport pores can be varied in the range of 12–80 nm, and the liquid permeance from 16.6 up to 207 kg/m2∙h∙bar. To summarize, the precipitation of polymer solution within the thin layer can be considered a prompt technique and a powerful tool for fast screening and optimization of the complex composition of casting solutions using its small quantity. Furthermore, the prediction of membrane morphology can be done without casting the membrane, further post-treatment procedures, and scanning electron microscopy (SEM) analysis.",5bddd6b97cf6bf96636fc7215205830a9bc1af14,Semantic Scholar,,highly relevant,"The paper details a novel Verification-of-Choice approach for prompting engineering in the context of medical domain LLMs, making it directly relevant to prompt engineering." the acquisition of prepositional motor responses in handicapped children1,['M. Guralnick'],https://europepmc.org/articles/pmc1312026?pdf=render,1976-12-01,,"The acquisition of prepositional motor responses in three handicapped preschool children was analyzed for three pairs of prepositions. Generalization of prepositional knowledge at each stage of acquisition was assessed by a series of probe trials. In addition, an analysis of the control of prepositional responses when objects of the preposition (OP) and direct objects (DO) were relevant cues was conducted. 
The effect of this object-cue procedure, as well as a specially devised prompting technique on acquisition, was also determined. Results indicated substantial control by OPs whenever this cue was relevant, but this did not affect acquisition of prepositional concepts when these cues were eliminated. Analysis of the probe data and the prompting technique suggested various ways in which instructional programs for teaching prepositional knowledge to handicapped children could be constructed in a simple and efficient manner.",65979da3cec496db1250ba841d9e4bd2603c6255,Semantic Scholar,,highly relevant,"The paper focuses on learning prompts using only text data derived from LLMs for adapting vision-language models for downstream tasks, which is central to the theme of prompt engineering." behavioral community psychology encouraging lowincome parents to seek dental care for their children,"['M. Reiss', 'W. Piotrowski', 'J. Bailey']",https://europepmc.org/articles/pmc1312035?pdf=render,1976-12-01,,"The present study examined the effectiveness and cost efficiency of three different techniques to encourage low-income rural parents to seek dental care for their children. The families of 51 children who needed immediate dental care (determined by dental screening at a local school) were placed into three matched groups and randomly assigned to the treatment conditions: One Prompt (Note Only), Three Prompt (Note, Telephone Contact, Home Visit), and One Prompt plus $5 Incentive- The Three Prompt and One Prompt plus $5 Incentive were significantly more effective in initiating dental visits than the Note-Only procedure. Not only was the One Prompt plus $5 Incentive technique effective in producing a slightly larger percentage of initial dental visits compared to the Three-Prompt technique, it also produced a significantly larger number of followup visits. Furthermore, the cost-effectiveness analysis showed the Incentive condition to be less costly than the Three-Prompt condition in encouraging initial dental visits.",6cc13adfe09ea69429ce7e5d5d66c2ff0f420f8b,Semantic Scholar,,highly relevant,"The paper explicitly explores 'prompt engineering' as one of the approaches for leveraging GPT-3.5 in the context of code review automation, making it highly relevant to the topic of prompt engineering." the difference between mathematical reasoning ability improvement by learning with meta cognitive approach aided probing and prompting techniques in smp negeri 4 seisuka,"['Nadran Hamdani Siregar', 'Kms. M. AminFauzi']",http://www.scholink.org/ojs/index.php/wjer/article/download/766/781,2016-12-23,,"The purpose of this study were: (1) analyzed the differences in students’ mathematical reasoning ability improvement taught by metacognition approach aided probing technique (PMT-probing) and metacognition approach aided prompting technique (PMT-prompting); and (2) described the process of the students’ responses in solving mathematical reasoning abilities. This study was a quasi experimentalresearch. The population in this study were all students of class VIII SMP Negeri 4 SeiSuka, with a purposive sampling techniques, the obtained sample was VIII-1 and VIII-2. The research instrument used a test of mathematical reasoning ability, and had qualified the criteria of content validity, and reliability coefficient of 0.819. Anova two ways was used to analyze the difference of mathematical reasoning ability improvement, while descriptive analysis was used to analyze students’ answers process. 
The results showed that: (1) There were differences in students’ mathematical reasoning skills improvement which were taught by metacognition approach aided probing techniques and the students taught by prompting technical approach; and (2) The process of the students’ responses on students’ mathematical reasoning abilitythrough learning with metacognition approach aided by prompting techniques was better than metacognition approach aided by probing techniques.",a3488f44051d21a6c8bcdcb37feadab17fb134e5,Semantic Scholar,,highly relevant,"The study focuses on using ChatGPT with specific prompts for annotating tweets, which is directly related to utilizing prompt engineering techniques." focus difference in cue fading a new technique,['L. E. Acker'],https://europepmc.org/articles/pmc1338549?pdf=render,1969-03-01,,"Following the development of cue fading as an adjunct to discrimination training with lower animals (Terrace, 1963a, 1963b, 1963c, 1964, 1966) it seemed valuable to utilize cue-fading (prompting) techniques with children. Because educational institutions offer the major source of child subjects and because such institutions can provide only limited and temporary space, it seemed desirable to consider a relatively simple, portable, and flexibly programmed apparatus for presenting prompted stimulus materials. Such an apparatus should allow for: (1) presentation of visual stimuli which can be progressively varied along a prompting dimension; (2) adjustment, during training, of the number and size of prompt steps in a prompting progression; and, (3) repetition or interruption of any part of a prompting progression at any time during training. The latter two requirements can be met in one of two ways: (1) many series of stimuli must be prepared which progress through the prompting dimension in a wide variety of steps and step sizes (successful cue fading with children has been obtained under these conditions: Acker, 1966; Sidman and Stoddard, 1967; Touchette, 1968; Gollin and Savoy, 1968); or (2) the training apparatus must be capable of making these changes in the prompting progression independently of the stimuli (Moore and Goldiamond, 1964). The present system utilizes the latter method, thus providing a more immediate and economical means of adjusting the prompting progression to suit the immediate needs of the experiment. The present system utilizes a focus difference between two stimuli (S+ and S-) to prompt a correct choice of S+ [Israel (1960), successfully used variable blurring to prompt recall of the second member in a pairedassociate learning task]. Two Kodak Carousel, 35-mm projectors (model No. AV900) with remote control are used to present the stimulus. Using one projector for presenting Sand the other for S+, a focus difference (prompt) between the two stimuli can be created. For example, keeping S+ in focus it is possible to present a series of S-s which progresses from extremely out-offocus to in-focus over trials. The number and size of steps in such a progression is immediately adjustable",ef1740d0713a27b1b5a590f7bca7aa7dd56ecdc2,Semantic Scholar,,somewhat relevant,"The abstract mentions the use of prompts in a data synthesis framework for multi-hop question answering, indicating relevance to prompt engineering." 
"can generative artificial intelligence write an academic journal article opportunities, challenges, and implications",['Hsiao-Ping Hsu'],https://journal.ilta.ie/index.php/telji/article/download/152/151,2023-12-07,,"This article offers an in-depth reflection on the author’s experiences with Generative Artificial Intelligence (Gen AI), ChatGPT 4.0. The author started the journey from their initial need for software for English proofreading and editing services to their interest in exploring pre-service teachers’ application of Gen AI in lesson planning. Based on prompt engineering techniques, an iterative three-stage manuscript generation process—brainstorming, refinement, and writing—with ChatGPT is detailed. A short paper generated by ChatGPT is presented. Although Gen AI is a valuable tool in providing insights and assistance in research idea generation and design, academic writing, and English writing learning, the author cautions that critical thinking plays a vital role in ensuring accuracy, ethical considerations, and the preservation of rigorous scholarly standards. As Gen AI emerges as a game-changer in academia and education, this article highlights the importance of balancing its emerging capabilities with maintaining traditional academic and educational values.",6a5efa3f47b84a865d29a9c060b3f402e6b52597,Semantic Scholar,,highly relevant,"The paper discusses using generic and custom prompts for evaluating the performance of language models in text annotation tasks, which directly involves prompt engineering." excitements and concerns in the postchatgpt era deciphering public perception of ai through social media analysis,"['Weihong Qi', 'Jinsheng Pan', 'Hanjia Lyu', 'Jiebo Luo']",https://arxiv.org/pdf/2307.05809,2023-07-11,,"As AI systems become increasingly prevalent in various aspects of daily life, gaining a comprehensive understanding of public perception towards these AI systems has become increasingly essential for several reasons such as ethical considerations, user experience, fear, disinformation, regulation, collaboration, and co-creation. In this study, we investigate how mass social media users perceive the recent rise of AI frameworks such as ChatGPT. We collect a total of 33,912 comments in 388 unique subreddits spanning from November 30, 2022 to June 8, 2023 using a list of AI-related keywords. We employ BERTopic to uncover the major themes regarding AI on Reddit. Additionally, we seek to gain deeper insights into public opinion by examining the distribution of topics across different subreddits. We observe that technology-related subreddits predominantly focus on the technical aspects of AI models. On the other hand, non-tech subreddits show greater interest in social issues such as concerns about job replacement or furlough. We leverage zero-shot prompting to analyze the sentiment and perception of AI among individual users. Through a comprehensive sentiment and emotion analysis, we discover that tech-centric communities exhibit greater polarization compared to non-tech communities when discussing AI topics. This research contributes to our broader understanding of public opinion surrounding artificial intelligence.",0edb53377d6b95b969e055698b1b34e647e53916,Semantic Scholar,,somewhat relevant,"The paper discusses enhancing source sentences with task-specific instructions for NER, which aligns with the concept of prompt engineering." 
an automatically discovered chainofthought prompt generalizes to novel models and datasets,"['Konstantin Hebenstreit', 'Robert Praas', 'Louis P Kiesewetter', 'M. Samwald']",https://arxiv.org/pdf/2305.02897,2023-05-04,,"Emergent chain-of-thought (CoT) reasoning capabilities promise to improve performance and explainability of large language models (LLMs). However, uncertainties remain about how reasoning strategies formulated for previous model generations generalize to new model generations and different datasets. In this small-scale study, we compare different reasoning strategies induced by zero-shot prompting across six recently released LLMs (davinci-002, davinci-003, GPT-3.5-turbo, GPT-4, Flan-T5-xxl and Cohere command-xlarge) on a mixture of six question-answering datasets, including datasets from scientific and medical domains. Our findings demonstrate that while some variations in effectiveness occur, gains from CoT reasoning strategies remain robust across different models and datasets. GPT-4 has the most benefit from current state-of-the-art reasoning strategies and exhibits the best performance by applying a prompt previously discovered through automated discovery.",313d3a911d82b054aa47df0ffd7e4c3b4bd5407f,Semantic Scholar,,highly relevant,"The paper describes using instructional note and rubrics to prompt GPT-4V for scoring, indicating a direct application of prompt engineering." searching for needles in a haystack on the role of incidental bilingualism in palm’s translation capability,"['Eleftheria Briakou', 'Colin Cherry', 'George F. Foster']",http://arxiv.org/pdf/2305.10266,2023-05-17,,"Large, multilingual language models exhibit surprisingly good zero- or few-shot machine translation capabilities, despite having never seen the intentionally-included translation examples provided to typical neural translation systems. We investigate the role of incidental bilingualism—the unintentional consumption of bilingual signals, including translation examples—in explaining the translation capabilities of large language models, taking the Pathways Language Model (PaLM) as a case study. We introduce a mixed-method approach to measure and understand incidental bilingualism at scale. We show that PaLM is exposed to over 30 million translation pairs across at least 44 languages. Furthermore, the amount of incidental bilingual content is highly correlated with the amount of monolingual in-language content for non-English languages. We relate incidental bilingual content to zero-shot prompts and show that it can be used to mine new prompts to improve PaLM’s out-of-English zero-shot translation quality. Finally, in a series of small-scale ablations, we show that its presence has a substantial impact on translation capabilities, although this impact diminishes with model scale.",3739242e1027c2d5e5f7f1cbe5f37072670badfc,Semantic Scholar,,somewhat relevant,"The paper mentions the use of 'prompt design' as part of an end-to-end framework for earthquake-induced human loss forecasting, indicating relevance to hard prefix prompting." 
automated evaluation of classroom instructional support with llms and bows connecting global predictions to specific feedback,"['Jacob Whitehill', 'Jennifer LoCasale-Crouch']",https://arxiv.org/pdf/2310.01132,2023-10-02,,"With the aim to provide teachers with more specific, frequent, and actionable feedback about their teaching, we explore how Large Language Models (LLMs) can be used to estimate ``Instructional Support'' domain scores of the CLassroom Assessment Scoring System (CLASS), a widely used observation protocol. We design a machine learning architecture that uses either zero-shot prompting of Meta's Llama2, and/or a classic Bag of Words (BoW) model, to classify individual utterances of teachers' speech (transcribed automatically using OpenAI's Whisper) for the presence of Instructional Support. Then, these utterance-level judgments are aggregated over an entire 15-min observation session to estimate a global CLASS score. Experiments on two CLASS-coded datasets of toddler and pre-kindergarten classrooms indicate that (1) automatic CLASS Instructional Support estimation accuracy using the proposed method (Pearson $R$ up to $0.47$) approaches human inter-rater reliability (up to $R=0.55$); (2) LLMs yield slightly greater accuracy than BoW for this task, though the best models often combined features extracted from both LLM and BoW; and (3) for classifying individual utterances, there is still room for improvement of automated methods compared to human-level judgments. Finally, (4) we illustrate how the model's outputs can be visualized at the utterance level to provide teachers with explainable feedback on which utterances were most positively or negatively correlated with specific CLASS dimensions.",5dccc306d316edb5d0ec5d8399c4113c5bd36c27,Semantic Scholar,,highly relevant,"The paper focuses on using weakly supervised prompt learning to generate medical prompts automatically for medical image classification, which is directly related to the topic of prompt engineering." probing power by prompting harnessing pretrained language models for power connotation framing,"['Shima Khanehzar', 'Trevor Cohn', 'Gosia Mikołajczak', 'Lea Frermann']",https://aclanthology.org/2023.eacl-main.61.pdf,,,"When describing actions, subtle changes in word choice can evoke very different associations with the involved entities. For instance, a company ‘employing workers’ evokes a more positive connotation than the one ‘exploiting’ them. This concept is called connotation. This paper investigates whether pre-trained language models (PLMs) encode such subtle connotative information about power differentials between involved entities. We design a probing framework for power connotation, building on Sap et al. (2017)’s operationalization of connotation frames. We show that zero-shot prompting of PLMs leads to above chance prediction of power connotation, however fine-tuning PLMs using our framework drastically improves their accuracy. Using our fine-tuned models, we present a case study of power dynamics in US news reporting on immigration, showing the potential of our framework as a tool for understanding subtle bias in the media.",60c11f02982bcfe1f8be25c87c82606aeef9758b,Semantic Scholar,,highly relevant,"The paper discusses using learnable prompts in conjunction with large pre-trained models for class-incremental learning, which directly relates to the topic of prompt engineering." broken neural scaling laws,"['Ethan Caballero', 'Kshitij Gupta', 'I. 
Rish', 'David Krueger']",https://arxiv.org/pdf/2210.14891,2022-10-26,,"We present a smoothly broken power law functional form (that we refer to as a Broken Neural Scaling Law (BNSL)) that accurately models&extrapolates the scaling behaviors of deep neural networks (i.e. how the evaluation metric of interest varies as amount of compute used for training (or inference), number of model parameters, training dataset size, model input size, number of training steps, or upstream performance varies) for various architectures&for each of various tasks within a large&diverse set of upstream&downstream tasks, in zero-shot, prompted,&finetuned settings. This set includes large-scale vision, language, audio, video, diffusion, generative modeling, multimodal learning, contrastive learning, AI alignment, AI capabilities, robotics, out-of-distribution (OOD) generalization, continual learning, transfer learning, uncertainty estimation / calibration, OOD detection, adversarial robustness, distillation, sparsity, retrieval, quantization, pruning, fairness, molecules, computer programming/coding, math word problems,""emergent phase transitions"", arithmetic, supervised learning, unsupervised/self-supervised learning,&reinforcement learning (single agent&multi-agent). When compared to other functional forms for neural scaling, this functional form yields extrapolations of scaling behavior that are considerably more accurate on this set. Moreover, this functional form accurately models&extrapolates scaling behavior that other functional forms are incapable of expressing such as the nonmonotonic transitions present in the scaling behavior of phenomena such as double descent&the delayed, sharp inflection points present in the scaling behavior of tasks such as arithmetic. Lastly, we use this functional form to glean insights about the limit of the predictability of scaling behavior. Code is available at https://github.com/ethancaballero/broken_neural_scaling_laws",61f329722cd94291898c2c8131606a55f7a07219,Semantic Scholar,,highly relevant,"The paper proposes Token-Level Prompt Decomposition (ToPro) for token-level sequence labeling tasks, aligning closely with the study of prompt-based methods, specifically in areas like NER and POS tagging." gpt as knowledge worker a zeroshot evaluation of (ai)cpa capabilities,"['Jillian Bommarito', 'M. Bommarito', 'D. Katz', 'Jessica Katz']",https://arxiv.org/pdf/2301.04408,2023-01-11,,"The global economy is increasingly dependent on knowledge workers to meet the needs of public and private organizations. While there is no single definition of knowledge work, organizations and industry groups still attempt to measure individuals' capability to engage in it. The most comprehensive assessment of capability readiness for professional knowledge workers is the Uniform CPA Examination developed by the American Institute of Certified Public Accountants (AICPA). In this paper, we experimentally evaluate OpenAI's `text-davinci-003` and prior versions of GPT on both a sample Regulation (REG) exam and an assessment of over 200 multiple-choice questions based on the AICPA Blueprints for legal, financial, accounting, technology, and ethical tasks. First, we find that `text-davinci-003` achieves a correct rate of 14.4% on a sample REG exam section, significantly underperforming human capabilities on quantitative reasoning in zero-shot prompts. 
Second, `text-davinci-003` appears to be approaching human-level performance on the Remembering&Understanding and Application skill levels in the Exam absent calculation. For best prompt and parameters, the model answers 57.6% of questions correctly, significantly better than the 25% guessing rate, and its top two answers are correct 82.1% of the time, indicating strong non-entailment. Finally, we find that recent generations of GPT-3 demonstrate material improvements on this assessment, rising from 30% for `text-davinci-001` to 57% for `text-davinci-003`. These findings strongly suggest that large language models have the potential to transform the quality and efficiency of future knowledge work.",651dac86d8bf847ec6780a878cb1e04d3d41f356,Semantic Scholar,,somewhat relevant,"The paper mentions utilizing a 'prompt based method using ChatGPT' for sentiment reduction in news content, indicating its application of prompt engineering." llms4ol large language models for ontology learning,"['Hamed Babaei Giglou', 'J. D’Souza', 'S. Auer']",https://arxiv.org/pdf/2307.16648,2023-07-31,,"We propose the LLMs4OL approach, which utilizes Large Language Models (LLMs) for Ontology Learning (OL). LLMs have shown significant advancements in natural language processing, demonstrating their ability to capture complex language patterns in different knowledge domains. Our LLMs4OL paradigm investigates the following hypothesis: \textit{Can LLMs effectively apply their language pattern capturing capability to OL, which involves automatically extracting and structuring knowledge from natural language text?} To test this hypothesis, we conduct a comprehensive evaluation using the zero-shot prompting method. We evaluate nine different LLM model families for three main OL tasks: term typing, taxonomy discovery, and extraction of non-taxonomic relations. Additionally, the evaluations encompass diverse genres of ontological knowledge, including lexicosemantic knowledge in WordNet, geographical knowledge in GeoNames, and medical knowledge in UMLS.",78b1601c013769294a1927d43e50dfa81d6af75f,Semantic Scholar,,highly relevant,"The article explicitly discusses utilizing generated prompts for zero-shot hypernym prediction with LLMs, indicating a focus on prompt engineering." zeroshot information extraction for clinical metaanalysis using large language models,"['David Kartchner', 'Selvi Ramalingam', 'Irfan Al-Hussaini', 'Olivia Kronick', 'Cassie S. Mitchell']",https://aclanthology.org/2023.bionlp-1.37.pdf,,,"Meta-analysis of randomized clinical trials (RCTs) plays a crucial role in evidence-based medicine but can be labor-intensive and error-prone. This study explores the use of large language models to enhance the efficiency of aggregating results from randomized clinical trials (RCTs) at scale. We perform a detailed comparison of the performance of these models in zero-shot prompt-based information extraction from a diverse set of RCTs to traditional manual annotation methods. We analyze the results for two different meta-analyses aimed at drug repurposing in cancer therapy pharmacovigilience in chronic myeloid leukemia. Our findings reveal that the best model for the two demonstrated tasks, ChatGPT can generally extract correct information and identify when the desired information is missing from an article. 
We additionally conduct a systematic error analysis, documenting the prevalence of diverse error types encountered during the process of prompt-based information extraction.",828dbdab5791d8539a7f90063d168b9258083326,Semantic Scholar,,somewhat relevant,"The paper discusses comparing modular and prompting-based methods in vision-language tasks, indicating relevance to prompt engineering." "zeroprompt scaling promptbased pretraining to 1, 000 tasks improves zeroshot generalization","['Hanwei Xu', 'Yujun Chen', 'Yulun Du', 'Nan Shao', 'Yanggang Wang', 'Haiyu Li', 'Zhilin Yang']",https://aclanthology.org/2022.findings-emnlp.312.pdf,2022-01-18,,"We propose a multitask pretraining approach ZeroPrompt for zero-shot generalization, focusing on task scaling and zero-shot prompting. While previous models are trained on only a few dozen tasks, we scale to 1,000 tasks for the first time using real-world data. This leads to a crucial discovery that task scaling can be an efficient alternative to model scaling; i.e., the model size has little impact on performance with an extremely large number of tasks. Our results show that task scaling can substantially improve training efficiency by 30 times in FLOPs. Moreover, we present a prompting method that incorporates a genetic algorithm to automatically search for the best prompt for unseen tasks, along with a few other improvements. Empirically, ZeroPrompt substantially improves both the efficiency and the performance of zero-shot learning across a variety of academic and production datasets.",842104ef0575823498f26cdd57b4b4dba655df9e,Semantic Scholar,,somewhat relevant,"The abstract mentions the use of 'prompt-based approaches' for identifying jargon, highlighting an application of prompt engineering." welfare diplomacy benchmarking language model cooperation,"['Gabriel Mukobi', 'Hannah Erlebach', 'Niklas Lauffer', 'Lewis Hammond', 'Alan Chan', 'Jesse Clifton']",https://arxiv.org/pdf/2310.08901,2023-10-13,,"The growing capabilities and increasingly widespread deployment of AI systems necessitate robust benchmarks for measuring their cooperative capabilities. Unfortunately, most multi-agent benchmarks are either zero-sum or purely cooperative, providing limited opportunities for such measurements. We introduce a general-sum variant of the zero-sum board game Diplomacy -- called Welfare Diplomacy -- in which players must balance investing in military conquest and domestic welfare. We argue that Welfare Diplomacy facilitates both a clearer assessment of and stronger training incentives for cooperative capabilities. Our contributions are: (1) proposing the Welfare Diplomacy rules and implementing them via an open-source Diplomacy engine; (2) constructing baseline agents using zero-shot prompted language models; and (3) conducting experiments where we find that baselines using state-of-the-art models attain high social welfare but are exploitable. Our work aims to promote societal safety by aiding researchers in developing and assessing multi-agent AI systems. Code to evaluate Welfare Diplomacy and reproduce our experiments is available at https://github.com/mukobi/welfare-diplomacy.",8460e51e6231c4573302ebd10ca765322fc1e3c3,Semantic Scholar,,highly relevant,"The paper focuses on prompt learning as a technique for improving the generalization capability of Vision-Language Models (VLMs) using synthetic data, which directly relates to the utilization and engineering of prompts." 
leveraging contextual information for effective entity salience detection,"['Rajarshi Bhowmik', 'Marco Ponza', 'Atharva Tendle', 'Anant Gupta', 'Rebecca Jiang', 'Xingyu Lu', 'Qian Zhao', 'Daniel Preotiuc-Pietro']",https://arxiv.org/pdf/2309.07990,2023-09-14,,"In text documents such as news articles, the content and key events usually revolve around a subset of all the entities mentioned in a document. These entities, often deemed as salient entities, provide useful cues of the aboutness of a document to a reader. Identifying the salience of entities was found helpful in several downstream applications such as search, ranking, and entity-centric summarization, among others. Prior work on salient entity detection mainly focused on machine learning models that require heavy feature engineering. We show that fine-tuning medium-sized language models with a cross-encoder style architecture yields substantial performance gains over feature engineering approaches. To this end, we conduct a comprehensive benchmarking of four publicly available datasets using models representative of the medium-sized pre-trained language model family. Additionally, we show that zero-shot prompting of instruction-tuned language models yields inferior results, indicating the task's uniqueness and complexity.",8a655a1b1deac0ba8792c4538b69f828983e363a,Semantic Scholar,,highly relevant,"The paper explores the use of prompt-based methods with large language models like ChatGPT for zero-shot keyphrase extraction, directly related to the topic of prompt engineering." auditing gender analyzers on text data,"['Siddharth D. Jaiswal', 'Ankit Kumar Verma', 'Animesh Mukherjee']",https://arxiv.org/pdf/2310.06061,2023-10-09,,"AI models have become extremely popular and accessible to the general public. However, they are continuously under the scanner due to their demonstrable biases toward various sections of the society like people of color and non-binary people. In this study, we audit three existing gender analyzers -- uClassify, Readable and HackerFactor, for biases against non-binary individuals. These tools are designed to predict only the cisgender binary labels, which leads to discrimination against non-binary members of the society. We curate two datasets -- Reddit comments (660k) and, Tumblr posts (2.05M) and our experimental evaluation shows that the tools are highly inaccurate with the overall accuracy being ~50% on all platforms. Predictions for non-binary comments on all platforms are mostly female, thus propagating the societal bias that non-binary individuals are effeminate. To address this, we fine-tune a BERT multi-label classifier on the two datasets in multiple combinations, observe an overall performance of ~77% on the most realistically deployable setting and a surprisingly higher performance of 90% for the non-binary class. We also audit ChatGPT using zero-shot prompts on a small dataset (due to high pricing) and observe an average accuracy of 58% for Reddit and Tumblr combined (with overall better results for Reddit). 
Thus, we show that existing systems, including highly advanced ones like ChatGPT are biased, and need better audits and moderation and, that such societal biases can be addressed and alleviated through simple off-the-shelf models like BERT trained on more gender inclusive datasets.",8e80592e469dd7f3391864a227271c8f95741f6b,Semantic Scholar,,highly relevant,"The paper discusses the use of prompt-based methods for constructing zero- and few-shot label predictors, making it highly relevant to prompt engineering." pieclass weaklysupervised text classification with prompting and noiserobust iterative ensemble training,"['Yunyi Zhang', 'Minhao Jiang', 'Yu Meng', 'Yu Zhang', 'Jiawei Han']",https://aclanthology.org/2023.emnlp-main.780.pdf,2023-05-23,,"Weakly-supervised text classification trains a classifier using the label name of each target class as the only supervision, which largely reduces human annotation efforts. Most existing methods first use the label names as static keyword-based features to generate pseudo labels, which are then used for final classifier training. While reasonable, such a commonly adopted framework suffers from two limitations: (1) keywords can have different meanings in different contexts and some text may not have any keyword, so keyword matching can induce noisy and inadequate pseudo labels; (2) the errors made in the pseudo label generation stage will directly propagate to the classifier training stage without a chance of being corrected. In this paper, we propose a new method, PIEClass, consisting of two modules: (1) a pseudo label acquisition module that uses zero-shot prompting of pre-trained language models (PLM) to get pseudo labels based on contextualized text understanding beyond static keyword matching, and (2) a noise-robust iterative ensemble training module that iteratively trains classifiers and updates pseudo labels by utilizing two PLM fine-tuning methods that regularize each other. Extensive experiments show that PIEClass achieves overall better performance than existing strong baselines on seven benchmark datasets and even achieves similar performance to fully-supervised classifiers on sentiment classification tasks.",a5960c6674f26118e1e81b95d5c2482dce159bfb,Semantic Scholar,,highly relevant,The paper's focus on using prompt-tuning with pre-trained language models for relation extraction highlights its direct relevance to hard prefix prompting and prompt engineering. federated prompting and chainofthought reasoning for improving llms answering,"['Xiangyang Liu', 'Tianqi Pang', 'Chenyou Fan']",http://arxiv.org/pdf/2304.13911,2023-04-27,,"We investigate how to enhance answer precision in frequently asked questions posed by distributed users using cloud-based Large Language Models (LLMs). Our study focuses on a typical situations where users ask similar queries that involve identical mathematical reasoning steps and problem-solving procedures. Due to the unsatisfactory accuracy of LLMs' zero-shot prompting with standalone questions, we propose to improve the distributed synonymous questions using Self-Consistency (SC) and Chain-of-Thought (CoT) techniques. Specifically, we first retrieve synonymous questions from a crowd-sourced database and create a federated question pool. We call these federated synonymous questions with the same or different parameters SP-questions or DP-questions, respectively. 
We refer to our methods as Fed-SP-SC and Fed-DP-CoT, which can generate significantly more accurate answers for all user queries without requiring sophisticated model-tuning. Through extensive experiments, we demonstrate that our proposed methods can significantly enhance question accuracy by fully exploring the synonymous nature of the questions and the consistency of the answers.",a7c0d9bf44045c9d4c41e329e2a87df0ae7e0af6,Semantic Scholar,,somewhat relevant,"The paper mentions 'prompt-based methods' as one of the few-shot adaptation methods for multi-modal models, making it related to prompt engineering." contextaware robust finetuning,"['Xiaofeng Mao', 'YueFeng Chen', 'Xiaojun Jia', 'Rong Zhang', 'Hui Xue', 'Zhao Li']",https://arxiv.org/pdf/2211.16175,2022-11-29,,"Contrastive Language-Image Pre-trained (CLIP) models have zero-shot ability of classifying an image belonging to ""[CLASS]"" by using similarity between the image and the prompt sentence ""a [CONTEXT] of [CLASS]"". Based on exhaustive text cues in ""[CONTEXT]"", CLIP model is aware of different contexts, e.g. background, style, viewpoint, and exhibits unprecedented robustness against a wide range of distribution shifts. However, recent works find further fine-tuning of CLIP models improves accuracy but sacrifices the robustness on downstream tasks. We conduct an empirical investigation to show fine-tuning will corrupt the context-aware ability of pre-trained CLIP features. To solve this problem, we propose Context-Aware Robust Fine-tuning (CAR-FT). CAR-FT regularizes the model during fine-tuning to capture the context information. Specifically, we use zero-shot prompt weights to get the context distribution contained in the image. By minimizing the Kullback-Leibler Divergence (KLD) between context distributions induced by original/fine-tuned CLIP models, CAR-FT makes the context-aware ability of CLIP inherited into downstream tasks, and achieves both higher In-Distribution (ID) and Out-Of-Distribution (OOD) accuracy. The experimental results show CAR-FT achieves superior robustness on five OOD test datasets of ImageNet, and meanwhile brings accuracy gains on nine downstream tasks. Additionally, CAR-FT surpasses previous Domain Generalization (DG) methods and gets 78.5% averaged accuracy on DomainBed benchmark, building the new state-of-the-art.",adb89ea270e47809d3341679a2d8fe2900a4bf97,Semantic Scholar,,highly relevant,"The paper is focused on the use of hierarchical prompts for continual learning, clearly building upon prompt engineering to improve learning retention." instance needs more care rewriting prompts for instances yields better zeroshot performance,"['Saurabh Srivastava', 'Chengyue Huang', 'Weiguo Fan', 'Ziyu Yao']",https://arxiv.org/pdf/2310.02107,2023-10-03,,"Enabling large language models (LLMs) to perform tasks in zero-shot has been an appealing goal owing to its labor-saving (i.e., requiring no task-specific annotations); as such, zero-shot prompting approaches also enjoy better task generalizability. To improve LLMs' zero-shot performance, prior work has focused on devising more effective task instructions (e.g., ``let's think step by step''). However, we argue that, in order for an LLM to solve them correctly in zero-shot, individual test instances need more carefully designed and customized instructions. To this end, we propose PRoMPTd, an approach that rewrites the task prompt for each individual test input to be more specific, unambiguous, and complete, so as to provide better guidance to the task LLM.
We evaluated PRoMPTd on eight datasets covering tasks including arithmetics, logical reasoning, and code generation, using GPT-4 as the task LLM. Notably, PRoMPTd achieves an absolute improvement of around 10% on the complex MATH dataset and 5% on the code generation task on HumanEval, outperforming conventional zero-shot methods. In addition, we also showed that the rewritten prompt can provide better interpretability of how the LLM resolves each test instance, which can potentially be leveraged as a defense mechanism against adversarial prompting. The source code and dataset can be obtained from https://github.com/salokr/PRoMPTd",b97074e2f1407b349c0abbb8c689a23c02d1924d,Semantic Scholar,,highly relevant,"The paper employs a prompt-based method for measuring implicit bias in LLMs, which aligns directly with the topic of prompt engineering." text style transfer evaluation using large language models,"['Phil Ostheimer', 'M. Nagda', 'Marius Kloft', 'Sophie Fellenz']",https://arxiv.org/pdf/2308.13577,2023-08-25,,"Evaluating Text Style Transfer (TST) is a complex task due to its multifaceted nature. The quality of the generated text is measured based on challenging factors, such as style transfer accuracy, content preservation, and overall fluency. While human evaluation is considered to be the gold standard in TST assessment, it is costly and often hard to reproduce. Therefore, automated metrics are prevalent in these domains. Nevertheless, it remains unclear whether these automated metrics correlate with human evaluations. Recent strides in Large Language Models (LLMs) have showcased their capacity to match and even exceed average human performance across diverse, unseen tasks. This suggests that LLMs could be a feasible alternative to human evaluation and other automated metrics in TST evaluation. We compare the results of different LLMs in TST using multiple input prompts. Our findings highlight a strong correlation between (even zero-shot) prompting and human evaluation, showing that LLMs often outperform traditional automated metrics. Furthermore, we introduce the concept of prompt ensembling, demonstrating its ability to enhance the robustness of TST evaluation. This research contributes to the ongoing evaluation of LLMs in diverse tasks, offering insights into successful outcomes and areas of limitation.",dfffba50d7630f1e68d9cc67d4a9a1c6519b93cd,Semantic Scholar,,highly relevant,"The paper focuses on Meta Prompting, an approach that includes self-generating new prompts for LLMs, directly associated with the concept of prompt engineering." erniecode beyond englishcentric crosslingual pretraining for programming languages,"['Yekun Chai', 'Shuohuan Wang', 'Chao Pang', 'Yu Sun', 'Hao Tian', 'Hua Wu']",http://arxiv.org/pdf/2212.06742,2022-12-13,,"Software engineers working with the same programming language (PL) may speak different natural languages (NLs) and vice versa, erecting huge barriers to communication and working efficiency. Recent studies have demonstrated the effectiveness of generative pre-training in computer programs, yet they are always English-centric. In this work, we step towards bridging the gap between multilingual NLs and multilingual PLs for large language models (LLMs). We release ERNIE-Code, a unified pre-trained language model for 116 NLs and 6 PLs. 
We employ two methods for universal cross-lingual pre-training: span-corruption language modeling that learns patterns from monolingual NL or PL; and pivot-based translation language modeling that relies on parallel data of many NLs and PLs. Extensive results show that ERNIE-Code outperforms previous multilingual LLMs for PL or NL across a wide range of end tasks of code intelligence, including multilingual code-to-text, text-to-code, code-to-code, and text-to-text generation. We further show its advantage of zero-shot prompting on multilingual code summarization and text-to-text translation. We release our code and pre-trained checkpoints.",e1b732e02cd6f41e4e1eb793ec4b356cee2587f1,Semantic Scholar,,somewhat relevant,"The paper focuses on using GPT-3.5 for generating visualization specifications from natural language, utilizing both zero-shot and few-shot prompt strategies, indicating an application of prompt engineering." debiased finetuning for visionlanguage models by prompt regularization,"['Beier Zhu', 'Yulei Niu', 'Saeil Lee', 'Minhoe Hur', 'Hanwang Zhang']",http://arxiv.org/pdf/2301.12429,2023-01-29,,"We present a new paradigm for fine-tuning large-scale vision-language pre-trained models on downstream task, dubbed Prompt Regularization (ProReg). Different from traditional fine-tuning which easily overfits to the downstream task data, ProReg uses the prediction by prompting the pretrained model to regularize the fine-tuning. The motivation is: by prompting the large model “a photo of a [CLASS]”, the fill-in answer is only dependent on the pretraining encyclopedic knowledge while independent of the task data distribution, which is usually biased. Specifically, given a training sample prediction during fine-tuning, we first calculate its Kullback-Leibler loss of the prompt prediction and Cross-Entropy loss of the ground-truth label, and then combine them with a proposed sample-wise adaptive trade- off weight, which automatically adjusts the transfer between the pretrained and downstream domains. On various out-of-distribution benchmarks, we show the consistently strong performance of ProReg compared with conventional fine-tuning, zero-shot prompt, prompt tuning, and other state-of-the-art methods.",e8b73abefd998229f35e810f465854bdea7512f8,Semantic Scholar,,somewhat relevant,"The paper focuses on readability assessment of multilingual models and mentions the use of few-shot prompting settings, indicating its relevance to prompt engineering." offenseval 2023 offensive language identification in the age of large language models,"['Marcos Zampieri', 'Sara Rosenthal', 'Preslav Nakov', 'A. Dmonte', 'Tharindu Ranasinghe']",https://www.cambridge.org/core/services/aop-cambridge-core/content/view/2605A4C9E45354D36C0B732B49DB8CA3/S1351324923000517a.pdf/div-class-title-offenseval-2023-offensive-language-identification-in-the-age-of-large-language-models-div.pdf,2023-11-01,,"Abstract The OffensEval shared tasks organized as part of SemEval-2019–2020 were very popular, attracting over 1300 participating teams. The two editions of the shared task helped advance the state of the art in offensive language identification by providing the community with benchmark datasets in Arabic, Danish, English, Greek, and Turkish. The datasets were annotated using the OLID hierarchical taxonomy, which since then has become the de facto standard in general offensive language identification research and was widely used beyond OffensEval. 
We present a survey of OffensEval and related competitions, and we discuss the main lessons learned. We further evaluate the performance of Large Language Models (LLMs), which have recently revolutionalized the field of Natural Language Processing. We use zero-shot prompting with six popular LLMs and zero-shot learning with two task-specific fine-tuned BERT models, and we compare the results against those of the top-performing teams at the OffensEval competitions. Our results show that while some LMMs such as Flan-T5 achieve competitive performance, in general LLMs lag behind the best OffensEval systems.",f1cdf45ac76c7f9d129bcc7ef839f5ec0b3c7b82,Semantic Scholar,,highly relevant,"The abstract mentions the use of 'few-shot prompts for various tasks automatically,' which aligns with the concept of prompt engineering, specifically focusing on the generation and optimization of prompts." bits of grass does gpt already know how to write like whitman,"['Piotr Sawicki', 'M. Grzes', 'Fabrício Góes', 'Daniel Brown', 'Max Peeperkorn', 'Aisha Khatun']",http://arxiv.org/pdf/2305.11064,2023-05-10,,"This study examines the ability of GPT-3.5, GPT-3.5-turbo (ChatGPT) and GPT-4 models to generate poems in the style of specific authors using zero-shot and many-shot prompts (which use the maximum context length of 8192 tokens). We assess the performance of models that are not fine-tuned for generating poetry in the style of specific authors, via automated evaluation. Our findings indicate that without fine-tuning, even when provided with the maximum number of 17 poem examples (8192 tokens) in the prompt, these models do not generate poetry in the desired style.",0fb6ce7f5d73d7121ff7c36488f070d41e3779a5,Semantic Scholar,,highly relevant,"The paper's focus on using zero-shot prompts to guide an LLM for complex numerical reasoning over financial documents demonstrates a direct application of prompt engineering techniques, particularly relevant to the study of hard prefix prompts." fewshot incontext learning on knowledge base question answering,"['Tianle Li', 'Xueguang Ma', 'Alex Zhuang', 'Yu Gu', 'Yu Su', 'Wenhu Chen']",http://arxiv.org/pdf/2305.01750,2023-05-02,,"Question answering over knowledge bases is considered a difficult problem due to the challenge of generalizing to a wide variety of possible natural language questions. Additionally, the heterogeneity of knowledge base schema items between different knowledge bases often necessitates specialized training for different knowledge base question-answering (KBQA) datasets. To handle questions over diverse KBQA datasets with a unified training-free framework, we propose KB-BINDER, which for the first time enables few-shot in-context learning over KBQA tasks. Firstly, KB-BINDER leverages large language models like Codex to generate logical forms as the draft for a specific question by imitating a few demonstrations. Secondly, KB-BINDER grounds on the knowledge base to bind the generated draft to an executable one with BM25 score matching. The experimental results on four public heterogeneous KBQA datasets show that KB-BINDER can achieve a strong performance with only a few in-context demonstrations. Especially on GraphQA and 3-hop MetaQA, KB-BINDER can even outperform the state-of-the-art trained models. On GrailQA and WebQSP, our model is also on par with other fully-trained models. We believe KB-BINDER can serve as an important baseline for future research. We plan to release all the code and data. 
Our code is available at https://github.com/ltl3A87/KB-BINDER.",0139e689add40a61c9454674edac4e93702aa5fc,Semantic Scholar,,highly relevant,"The paper describes using a few-shot prompting strategy with large language models for generating executable metamorphic relations, which directly involves prompt engineering." parallel context windows improve incontext learning of large language models,"['Nir Ratner', 'Yoav Levine', 'Yonatan Belinkov', 'Ori Ram', 'Omri Abend', 'Ehud D. Karpas', 'A. Shashua', 'Kevin Leyton-Brown', 'Y. Shoham']",https://arxiv.org/pdf/2212.10947,,,"For applications that require processing large amounts of text at inference time, Large Language Models (LLMs) are handicapped by their limited context windows, which are typically 2048 tokens. In-context learning, an emergent phenomenon in LLMs in sizes above a certain parameter threshold, constitutes one significant example because it can only leverage training examples that fit into the context window. Existing efforts to address the context window limitation involve training specialized architectures, which tend to be smaller than the sizes in which in-context learning manifests due to the memory footprint of processing long texts. We present Parallel Context Windows (PCW), a method that alleviates the context window restriction for any off-the-shelf LLM without further training. The key to the approach is to carve a long context into chunks (“windows”) that fit within the architecture, restrict the attention mechanism to apply only within each window, and re-use the positional embeddings among the windows. We test the PCW approach on in-context learning with models that range in size between 750 million and 178 billion parameters, and show substantial improvements for tasks with diverse input and output spaces. Our results motivate further investigation of Parallel Context Windows as a method for applying off-the-shelf LLMs in other settings that require long text sequences.",0eedbc38bc215fdbe4e5bcde8aeac08fb3ce9f44,Semantic Scholar,,somewhat relevant,The use of a one-shot prompt with the ChatGPT 3.5 API indicates the paper involves prompt engineering but is more focused on improving complex table queries in RAG systems. kicgpt large language model with knowledge in context for knowledge graph completion,"['Yanbin Wei', 'Qiushi Huang', 'James T. Kwok', 'Yu Zhang']",https://aclanthology.org/2023.findings-emnlp.580.pdf,2024-02-04,,"Knowledge Graph Completion (KGC) is crucial for addressing knowledge graph incompleteness and supporting downstream applications. Many models have been proposed for KGC. They can be categorized into two main classes: triple-based and text-based approaches. Triple-based methods struggle with long-tail entities due to limited structural information and imbalanced entity distributions. Text-based methods alleviate this issue but require costly training for language models and specific finetuning for knowledge graphs, which limits their efficiency. To alleviate these limitations, in this paper, we propose KICGPT, a framework that integrates a large language model (LLM) and a triple-based KGC retriever. It alleviates the long-tail problem without incurring additional training overhead. KICGPT uses an in-context learning strategy called Knowledge Prompt, which encodes structural knowledge into demonstrations to guide the LLM.
Empirical results on benchmark datasets demonstrate the effectiveness of KICGPT with smaller training overhead and no finetuning.",13c0f33ccd88607fce4819135e404a988aa8aad4,Semantic Scholar,,highly relevant,"The paper discusses various prompting techniques to guide the outputs of Large Language Models, directly relevant to the topic of prompt engineering." enhancing chinese address parsing in lowresource scenarios through incontext learning,"['Guangming Ling', 'Xiaofeng Mu', 'Chao Wang', 'Aiping Xu']",https://www.mdpi.com/2220-9964/12/7/296/pdf?version=1690185528,2023-07-22,,"Address parsing is a crucial task in natural language processing, particularly for Chinese addresses. The complex structure and semantic features of Chinese addresses present challenges due to their inherent ambiguity. Additionally, different task scenarios require varying levels of granularity in address components, further complicating the parsing process. To address these challenges and adapt to low-resource environments, we propose CapICL, a novel Chinese address parsing model based on the In-Context Learning (ICL) framework. CapICL leverages a sequence generator, regular expression matching, BERT semantic similarity computation, and Generative Pre-trained Transformer (GPT) modeling to enhance parsing accuracy by incorporating contextual information. We construct the sequence generator using a small annotated dataset, capturing distribution patterns and boundary features of address types to model address structure and semantics, which mitigates interference from unnecessary variations. We introduce the REB–KNN algorithm, which selects similar samples for ICL-based parsing using regular expression matching and BERT semantic similarity computation. The selected samples, raw text, and explanatory text are combined to form prompts and inputted into the GPT model for prediction and address parsing. Experimental results demonstrate significant achievements of CapICL in low-resource environments, reducing dependency on annotated data and computational resources. Our model’s effectiveness, adaptability, and broad application potential are validated, showcasing its positive impact in natural language processing and geographical information systems.",1a3715b07636c8396e9d722057c5b052cbf03920,Semantic Scholar,,highly relevant,"The paper introduces a visual prompting technique for enhancing GPT-4V's performance on 3D spatial tasks, which is a direct application of prompt engineering." pillow enhancing efficient instruction finetuning via prompt matching,"['Zhenting Qi', 'Xiaoyu Tan', 'Shaojie Shi', 'Chao Qu', 'Yinghui Xu', 'Yuan Qi']",https://aclanthology.org/2023.emnlp-industry.45.pdf,2023-12-09,,"Instruction fine-tuning has conventionally been employed to adapt Large Language Models (LLMs) to a variety of tasks. Nonetheless, this technique often necessitates substantial computational resources, making it impractical for deployment by individuals or small-scale entities. Recently, Low-Rank Adaptation (LoRA) has become a promising alternative, offering high capabilities on par with full tuning with reduced resource overhead. However, attaining satisfactory performance through the fine-tuning of LoRA is a non-trivial challenge. In this paper, we propose PILLOW, which aims to improve LoRA's performance by a discrimination-based prompting method, leveraging LLMs' In-Context Learning ability. 
PILLOW incorporates a matching network that selects prompts from a user-defined prompt pool, concatenates the selected prompts with the user instruction as input, and performs inference using the LoRA-fine-tuned LLMs. Trained with Reinforcement Learning, PILLOW exhibits commensurate performance on various evaluation metrics compared with typical instruction fine-tuning methods, utilizing only consumer-grade GPU resources and exhibiting a large reduction in computational costs.",1c6a5d033743f345447e45e1eb6d6c7cadee9f78,Semantic Scholar,,highly relevant,"The paper uses the Chain of Thought (CoT) prompting technique, directly relating to prompt engineering." selfprompting large language models for opendomain qa,"['Junlong Li', 'Zhuosheng Zhang', 'Hai Zhao']",http://arxiv.org/pdf/2212.08635,,,"Open-Domain Question Answering (ODQA) requires models to answer factoid questions with no context given. The common way for this task is to train models on a large-scale annotated dataset to retrieve related documents and generate answers based on these documents. In this paper, we show that the ODQA architecture can be dramatically simplified by treating Large Language Models (LLMs) as a knowledge corpus and propose a Self-Prompting framework for LLMs to perform ODQA so as to eliminate the need for training data and external knowledge corpus. Concretely, we firstly generate multiple pseudo QA pairs with background passages and one-sentence explanations for these QAs by prompting LLMs step by step and then leverage the generated QA pairs for in-context learning. Experimental results show our method surpasses previous state-of-the-art methods by +8.8 EM averagely on three widely-used ODQA datasets, and even achieves comparable performance with several retrieval-augmented fine-tuned models.",1e122149779c644855d1cccca5d96135db0482cb,Semantic Scholar,,somewhat relevant,"The paper mentions 'enhancing their problem-solving ability with novel prompting techniques', indicating its relevance to prompt engineering." differentially private incontext learning,"['Ashwinee Panda', 'Tong Wu', 'Jiachen T. Wang', 'Prateek Mittal']",https://arxiv.org/pdf/2305.01639,,,An important question in deploying large language models (LLMs) is how to augment LLMs with private data. We propose Differentially Private In-context Learning (DP-ICL) to enable LLMs to adapt to new tasks while maintaining privacy guarantees. DP-ICL performs private inference by establishing a noisy consensus over an ensemble of exemplars using the Report-Noisy-Max mechanism. We evaluate DP-ICL on four benchmarks and find that it achieves comparable performance (< 2% degradation) with non-private ICL.,227dcfb8f289b9629997da8572cfa84a3a016e2e,Semantic Scholar,,highly relevant,"The paper discusses the CoT-Influx approach, which involves refining the prompt with more concise examples to improve LLM reasoning, directly relating to the prompt engineering, especially in the context of hard prefix prompts." spot better frozen model adaptation through soft prompt transfer,"['Tu Vu', 'Brian Lester', 'Noah Constant', 'Rami Al-Rfou', 'Daniel Matthew Cer']",https://aclanthology.org/2022.acl-long.346.pdf,2021-10-15,,"There has been growing interest in parameter-efficient methods to apply pre-trained language models to downstream tasks. Building on the Prompt Tuning approach of Lester et al. 
(2021), which learns task-specific soft prompts to condition a frozen pre-trained model to perform different tasks, we propose a novel prompt-based transfer learning approach called SPoT: Soft Prompt Transfer. SPoT first learns a prompt on one or more source tasks and then uses it to initialize the prompt for a target task. We show that SPoT significantly boosts the performance of Prompt Tuning across many tasks. More remarkably, across all model sizes, SPoT matches or outperforms standard Model Tuning (which fine-tunes all model parameters) on the SuperGLUE benchmark, while using up to 27,000× fewer task-specific parameters. To understand where SPoT is most effective, we conduct a large-scale study on task transferability with 26 NLP tasks in 160 combinations, and demonstrate that many tasks can benefit each other via prompt transfer. Finally, we propose an efficient retrieval approach that interprets task prompts as task embeddings to identify similar tasks and predict the most transferable source tasks for a novel target task.",c28b7dfe341f1e13a5a98efbce7946ef795cf9b8,Semantic Scholar,,highly relevant,"The paper discusses the use of prompting techniques ('Miniature & Mull' and 'Explain & Compare') to assist LLMs in evaluating SQL queries, which is directly related to prompt engineering." exploring transfer learning in medical image segmentation using visionlanguage models,"['K. Poudel', 'Manish Dhakal', 'Prasiddha Bhandari', 'Rabin Adhikari', 'Safal Thapaliya', 'Bishesh Khanal']",https://arxiv.org/pdf/2308.07706,2023-08-15,,"Medical image segmentation with deep learning is an important and widely studied topic because segmentation enables quantifying target structure size and shape that can help in disease diagnosis, prognosis, surgery planning, and understanding. Recent advances in the foundation VLMs and their adaptation to segmentation tasks in natural images with VLSMs have opened up a unique opportunity to build potentially powerful segmentation models for medical images that enable providing helpful information via language prompt as input, leverage the extensive range of other medical imaging datasets by pooled dataset training, adapt to new classes, and be robust against out-of-distribution data with human-in-the-loop prompting during inference. Although transfer learning from natural to medical images for image-only segmentation models has been studied, no studies have analyzed how the joint representation of vision-language transfers to medical images in segmentation problems and understand gaps in leveraging their full potential. We present the first benchmark study on transfer learning of VLSMs to 2D medical images with thoughtfully collected 11 existing 2D medical image datasets of diverse modalities with carefully presented 9 types of language prompts from 14 attributes. Our results indicate that VLSMs trained in natural image-text pairs transfer reasonably to the medical domain in zero-shot settings when prompted appropriately for non-radiology photographic modalities; when finetuned, they obtain comparable performance to conventional architectures, even in X-rays and ultrasound modalities. 
However, the additional benefit of language prompts during finetuning may be limited, with image features playing a more dominant role; they can better handle training on pooled datasets combining diverse modalities and are potentially more robust to domain shift than the conventional segmentation models.",b18daa14486920016c4664c3ed1759f2de1ba854,Semantic Scholar,,highly relevant,"The paper analyzes human likeness in GPT-3.5 generated comments using multiple prompting techniques, directly tying into prompt engineering." multimodal prompt learning in emotion recognition using context and audio information,"['Eunseo Jeong', 'Gyu-Min Kim', 'Sangwoo Kang']",https://www.mdpi.com/2227-7390/11/13/2908/pdf?version=1688017556,2023-06-28,,"Prompt learning has improved the performance of language models by reducing the gap in language model training methods of pre-training and downstream tasks. However, extending prompt learning in language models pre-trained with unimodal data to multimodal sources is difficult as it requires additional deep-learning layers that cannot be attached. In the natural-language emotion-recognition task, improved emotional classification can be expected when using audio and text to train a model rather than only natural-language text. Audio information, such as voice pitch, tone, and intonation, can give more information that is unavailable in text to predict emotions more effectively. Thus, using both audio and text can enable better emotion prediction in speech emotion-recognition models compared to semantic information alone. In this paper, in contrast to existing studies that use multimodal data with an additional layer, we propose a method for improving the performance of speech emotion recognition using multimodal prompt learning with text-based pre-trained models. The proposed method is using text and audio information in prompt learning by employing a language model pre-trained on natural-language text. In addition, we propose a method to improve the emotion-recognition performance of the current utterance using the emotion and contextual information of the previous utterances for prompt learning in speech emotion-recognition tasks. The performance of the proposed method was evaluated using the English multimodal dataset MELD and the Korean multimodal dataset KEMDy20. Experiments using both the proposed methods obtained an accuracy of 87.49%, F1 score of 44.16, and weighted F1 score of 86.28.",1383f2b0a9debfa2f26d963c5fd04fcee6e9bb6f,Semantic Scholar,,somewhat relevant,"The paper mentions the use of a format-prompting technique to convert open-ended questions into a closed-form format, indicating relevance to prompt engineering." subjective cognitive complaints and objective memory performance influence prompt preference for instrumental activities of daily living,"['Emily J Van Etten', 'A. Weakley', 'M. Schmitter-Edgecombe', 'D. Cook']",https://europepmc.org/articles/pmc5597053?pdf=render,2016-04-27,,"INTRODUCTION Declines in memory and executive functioning often lead to difficulties completing instrumental activities of daily living (IADLs). Prompting technologies have the potential to help promote aging in place by providing support for the initiation and accurate completion of IADLs. In this study, we evaluate preferences of older adults for different levels of prompting support based on subjective and objective measures of cognitive functioning. 
METHOD Participants were 170 community-dwelling older adults split into two cognitive complaint groups: cognitive complaints and few cognitive complaints. After completing six IADL tasks (e.g., organize a pillbox, cook), each participant was asked to make a specific error (e.g., leave stove on) on three of the tasks. They were then prompted to correct the error with one of three different prompt modes: verbal indirect, verbal direct, multimodal verbal direct and video. RESULTS The cognitive complaints group reported greater preference for the multimodal prompt compared to the few cognitive complaints group. The indirect prompt was the least preferred by both groups. Furthermore, participants who recalled less on objective memory measures preferred more support in terms of prompt mode. Executive functioning did not appear to be related to prompt preference. CONCLUSION Level of subjective cognitive complaints and objective memory performance may influence amount of support preferred in a prompt.",3ad332948f098cbd469a80a456f55dbcd4428aa1,Semantic Scholar,,highly relevant,"The paper explores the application of LLMs using zero-shot and Chain-of-Thought prompting techniques to enhance ASR accuracy in medical transcription, directly engaging with prompt engineering strategies." winning solution for the cvpr2023 visual anomaly and novelty detection challenge multimodal prompting for datacentric anomaly detection,"['Yunkang Cao', 'Xiaohao Xu', 'Chen Sun', 'Yuqi Cheng', 'Liang Gao', 'Weiming Shen']",https://arxiv.org/pdf/2306.09067,,,"This technical report introduces the winning solution of the team Segment Any Anomaly for the CVPR2023 Visual Anomaly and Novelty Detection (VAND) challenge. Going beyond uni-modal prompt, e.g., language prompt, we present a novel framework, i.e., Segment Any Anomaly + (SAA + ), for zero-shot anomaly segmentation with multi-modal prompts for the regularization of cascaded modern foundation models. Inspired by the great zero-shot generalization ability of foundation models like Segment Anything, we first explore their assembly (SAA) to leverage diverse multi-modal prior knowledge for anomaly localization. Subsequently, we further introduce multimodal prompts (SAA + ) derived from domain expert knowledge and target image context to enable the non-parameter adaptation of foundation models to anomaly segmentation. The proposed SAA + model achieves state-of-the-art performance on several anomaly segmentation benchmarks, including VisA and MVTec-AD, in the zero-shot setting. We will release the code of our winning solution for the CVPR2023 VAND challenge at https:/",d773a472101e4d23cdd1f5fc96c1f61a9c0f90a2,Semantic Scholar,,somewhat relevant,"The paper highlights susceptiveness to prompt hacking, which directly relates to the use and manipulation of prompts in interactions with LLMs." multimodal prompts effectively elicit robotinitiated social touch interactions,"['Spatika Sampath Gujran', 'Merel M. Jung']",https://dl.acm.org/doi/pdf/10.1145/3610661.3617642,2023-10-09,,"Social touch plays an important role in building interpersonal relationships and might therefore also facilitate interactions with social robots. As people tend to have less experience interacting with social robots compared to with humans, especially with interactions involving social touch, more explicit communication might be necessary to disambiguate social intentions. In the experiment, participants engaged in an informal conversation with humanoid robot Pepper. 
Throughout the interaction, Pepper initiated various social touch interactions such as a handshake during introductions and a hug to say goodbye by using either a unimodal prompt (control condition: movement cue only) or a multimodal prompt (experimental condition: movement and verbal cue). The results show that the multimodal prompts significantly increased the number of successfully elicited social touch interactions. No significant differences in the self-reported perception of the robot were found between condition. Our results help to inform the design of robots that are intended to engage in social touch interactions.",ef260d49059e936335bfa17db6b358f3dfc2a65b,Semantic Scholar,,somewhat relevant,"The paper describes the use of prompting techniques to elicit responses from a Large Language Model (GPT) for economic decision-making tasks, indicating its relevance to prompt engineering." are there differences in female sexuality related to educational level,"['I. Y. Abdallah', 'H. Elhadi', 'S. Younis']",https://bjas.journals.ekb.eg/article_136338_df7ff52e971066be132445c946059e7a.pdf,2020-07-01,,"Sexual knowledge is a collection of information and refers to the knowledge and awareness of the individual about sex and sexuality (including physiological aspects, reproduction, performance, and individual sexual behavior). Absence of sexual information is related with an expansion in powerlessness, which makes a setting for the rise of sexual issue. Training not just has a constructive job in anticipation of adverse results, for example, explicitly transmitted contaminations, sexual maltreatment and sexual discouragement, yet in addition prompts constructive results at individual levels and relational relationships. Evaluate the effect of various instructive levels on female sexuality in an example of Egyptian wedded women. A self-report survey planned by the creators guided by the female sexual capacity index [1]. The point of the investigation and the subtleties of the poll were disclosed to the ladies before taking their educated assent. Members were 300 hitched ladies going to the outpatient center in Benha University emergency clinic, Maternal and Childhood care units in Benha city, during the period from October 2019 to May 2020. They were educated about the idea of the investigation and requested to partake before taking their educated consent. Increasing in level of training prompts progressively sexual satisfaction. The present examination discoveries show that Increase in the degree of instruction prompts expanded sexual movement (want, grease, sexual fulfillment) and to increasingly sexual fulfillment. Sexual information and experience increments with expanding level of training. Ladies with elevated level of training can manage any issue identified with sexuality so female sexual dysfunctions are less in profoundly taught women. Educational level and sexual fulfillment were fundamentally related.",1278b0855d40d3cf008c3b790651c50d3c91e5cd,Semantic Scholar,,highly relevant,"The paper details the use of large language models prompted with game context and agent observations to play open-world survival games, indicating a direct application of prompt engineering." teaching foster grandparents to train severely handicapped persons,"['P. Fabry', 'D. Reid']",https://europepmc.org/articles/pmc1311274?pdf=render,1978-03-01,,"Five foster grandparents were taught training skills for use in their daily interactions with severely handicapped persons in an institution.
Following baseline, specific teaching procedures consisting of teacher instructions, prompts, modelling, and praise were implemented. The grandparents' frequency of training three skill areas increased as the specific teaching was implemented in multiple-baseline format. The total amount of training continued as teacher instructions, prompts, and modelling were terminated and praise continued, although the grandparents spent their training time emphasizing only two of the three skill areas. Teacher presence was gradually reduced over an 11-week period, with no decrease in grandparents' frequency of training. Four of the foster grandchildren, all profoundly retarded and multiply handicapped, demonstrated progress throughout the study. Results were discussed in light of the available contributions of foster grandparents in institutional settings and maintenance of staff training.",1efc61280d9cd182f05f2f49680af6f5290fc747,Semantic Scholar,,highly relevant,"The paper focuses on personalizing LLM outputs using context from users' histories, which involves prompt augmentation, directly relating to prompt engineering." an assessment of digital stimulus prompts to teach conditional discriminations to children with autism,['Haven Niland'],https://digital.library.unt.edu/ark:/67531/metadc2179316/m2/1/high_res_d/NILAND-DISSERTATION-2023.pdf,,,"Effective and efficient skill-acquisition procedures must be identified to support individualized behavioral programming for children with autism spectrum disorder (ASD). To do this, practitioners and researchers may use assessment-based instruction. Prompts are a common teaching strategy to promote skill acquisition. The purpose of this applied study was to use assessment-based instruction to evaluate the efficacy and efficiency of within- and extra-stimulus prompts to teach conditional discriminations to two children with ASD. We identified stimulus prompts using a survey of popular children's games and conducted a tablet-based instruction readiness assessment. Stimulus prompts involved motion (within-stimulus) and pointing (extra-stimulus) to evoke correct responses in the presence of a discriminative stimulus. We used an adapted alternating treatments design with a no-treatment control condition to evaluate the effects of both prompt types across multiple sets of stimuli. Both stimulus prompt types were efficacious in facilitating skill acquisition for two of three participants. Little difference was observed in the time to mastery with either prompt. Neither stimulus prompt was efficacious for the third participant. Assessment results will be used to inform clinical programming to teach conditional discriminations to participants and contribute to research on designing and implementing assessments of skill-acquisition procedures.",1f748cf0a830672ec516009789d720c62828979e,Semantic Scholar,,highly relevant,"The paper explores using user preferences as natural language standing instructions in LLM prompts, directly relevant to hard prefix prompting in prompt engineering." the potential of the hybrid course vis‐à‐vis online and traditional courses,['D. Brunner'],https://digitalcommons.georgefox.edu/cgi/viewcontent.cgi?article=1022&context=gfes,2006-10-01,,"Face-to-face, hybrid, and online courses are part of the panoply of course options available to students and teachers in the twenty-first century. This essay tackles the promise of hybrid courses for enhancing student learning in seminary contexts.
The author contends that the introduction of hybrid instruction prompts faculty to revisit questions about pedagogy and improves student learning.",3a57d97f7b39958167a8617b73914054af7bf7da,Semantic Scholar,,highly relevant,"The paper specifically discusses the use of LLM prompting for code generation at a repository scale and examines methods applicable for commercial use, aligning with the focus on prompt engineering, especially within the context of enhancing performance through iterative refinement of prompts." reframing instructional prompts to gptk’s language,"['Swaroop Mishra', 'Daniel Khashabi', 'Chitta Baral', 'Yejin Choi', 'Hannaneh Hajishirzi']",https://aclanthology.org/2022.findings-acl.50.pdf,2021-09-16,,"What kinds of instructional prompts are easier to follow for Language Models (LMs)? We study this question by conducting extensive empirical analysis that shed light on important features of successful instructional prompts. Specifically, we study several classes of reframing techniques for manual reformulation of prompts into more effective ones. Some examples include decomposing a complex task instruction into multiple simpler tasks or itemizing instructions into sequential steps. Our experiments compare the zero-shot and few-shot performance of LMs prompted with reframed instructions on 12 NLP tasks across 6 categories. Compared with original instructions, our reframed instructions lead to significant improvements across LMs with different sizes. For example, the same reframed prompts boost few-shot performance of GPT3-series and GPT2-series by 12.5% and 6.7% respectively averaged over all tasks. Furthermore, reframed instructions reduce the number of examples required to prompt LMs in the few-shot setting. We hope these empirically-driven techniques will pave the way towards more effective future prompting algorithms.",3ac5aa6ac59253611ef3cb72a95cbe21ef5dda1b,Semantic Scholar,,somewhat relevant,"The paper discusses using the ChatGPT API with specific input (students' code) to generate feedback, implying the use of prompts, but does not focus on the engineering or optimization of these prompts." "the perceived effectiveness of nonverbal, coverbal, and verbal string ensemble instruction student, teacher, and observer views",['R. MacLeod'],http://libres.uncg.edu/ir/uncg/f/R_MacLeod_Perceived_2018.pdf,2018-06-01,,"The purpose of this study was to determine how students, teachers, and outside observers perceive teaching effectiveness within a university-level string ensemble rehearsal setting. Students, teachers, and observers reflected on six rehearsal segments that used primarily nonverbal, co-verbal, or verbal instruction as outlined by Bob Culver in the Master Teacher Profile. Overall, participants viewed the verbal teaching episodes as being most effective, and expressed a preference for several elements associated with the verbal instructional mode. Five common elements of effective rehearsals identified by participants were Specific Instructions and Feedback, Delivery Skills and Eye Contact, Audible and Focused Co-Verbal Instruction Prompts, Conducting Effectiveness, and Ensemble Progress.
Effectiveness perceptions were colored by participants’ sense of each teacher’s comfort with the different instructional modes as well as the elements of rehearsal teaching they personally valued.",4ee9b363de00d23fa8fb7a7a360024a63242f4e9,Semantic Scholar,,highly relevant,"The paper introduces 'autoprompting technique' for input generation, which indicates it involves prompt engineering." implementation of aba (applied behaviour analysis) therapy for children with autism spectrum disorders at the therapy center in the yogyakarta special area,['Murti Muninggar'],https://journal2.um.ac.id/index.php/jppplb/article/download/22357/pdf,2021-07-31,,"This study aims to determine: (1) To explore what techniques in ABA (applied behavior analysis) therapy given by therapists to children with spectrum disorders at the therapy center of the Special Region of Yogyakarta, (2) To implement the implementation of the ABA (applied behavior analysis) therapy process given by the therapist to spectrum disorder children at the Yogyakarta Special Region therapy center, and (3) To reveal whether or not the ABA (applied behavior analysis) therapy given by the therapist to spectrum disorder children in a therapy center for the Special Region of Yogyakarta. The research method in this article uses qualitative research methods. This type of research uses descriptive analysis. Data collection techniques in this study using observation, interviews, and documentation. Data analysis techniques use data reduction, data presentation, and conclusion. Sources of data in this study using primary data and secondary data. The research subjects consisted of therapists, parents, and children with autism spectrum disorders. The results of this study indicate that: (1) ABA (applied behavior analysis) therapy techniques given by therapists to children with spectrum disorders include: instructions, prompts, and rewards, (2) The implementation of ABA (applied behavior analysis) therapy by the therapist is quite good because in this therapy center there is already an adequate program to support the success of children in their development, (3) ABA (applied behavior analysis) therapy given by therapists to children with autism spectrum disorders is very effective because from this therapy the therapists teach children to follow instructions, respond to the words of others, and imitate.",4fa4e9c6bc8f27137c81f65188ddd3f8caff0d49,Semantic Scholar,,somewhat relevant,"The paper mentions optimizing non-differentiable language-shaped reward functions generated by few-shot LLM prompting, directly indicating relevance to prompt engineering." genetic prompt search via exploiting language model probabilities,"['Jiangjiang Zhao', 'Zhuoran Wang', 'Fan Yang']",https://www.ijcai.org/proceedings/2023/0588.pdf,2023-08-01,,"Prompt tuning for large-scale pretrained language models (PLMs) has shown remarkable potential, especially in low-resource scenarios such as few-shot learning. Moreover, derivative-free optimisation (DFO) techniques make it possible to tune prompts for a black-box PLM to better fit downstream tasks. However, there are usually preconditions to apply existing DFO-based prompt tuning methods, e.g. the backbone PLM needs to provide extra APIs so that hidden states (and/or embedding vectors) can be injected into it as continuous prompts, or carefully designed (discrete) manual prompts need to be available beforehand, serving as the initial states of the tuning algorithm. 
To waive such preconditions and make DFO-based prompt tuning ready for general use, this paper introduces a novel genetic algorithm (GA) that evolves from empty prompts, and uses the predictive probabilities derived from the backbone PLM(s) on the basis of a (few-shot) training set to guide the token selection process during prompt mutations. Experimental results on diverse benchmark datasets show that the proposed precondition-free method significantly outperforms the existing DFO-style counterparts that require preconditions, including black-box tuning, genetic prompt search and gradient-free instructional prompt search.",7a29fb7a37869126840ed71ac7671db2e985f443,Semantic Scholar,,highly relevant,"The paper focuses on in-context learning for machine translation using GPT-4, which directly relates to the use of prompts to improve performance without task-specific fine-tuning." "systematic replication of the effects of a supplementary, technologyassisted, storybook intervention for preschool children with weak vocabulary and comprehension skills","['C. Greenwood', 'J. Carta', 'Gabriela Guerrero', 'J. Atwater', 'E. Kelley', 'Na Young Kong', 'H. Goldstein']",https://irl.umsl.edu/context/espp/article/1034/viewcontent/686223.pdf,2016-05-19,,"In 2013, Spencer, Goldstein, Sherman, et al. reported the promising effects of a supplemental, technology-assisted, storybook intervention (Tier 2) containing embedded instruction targeting the oral language learning of preschool children at risk for delays. We sought to advance knowledge of the intervention by replicating it in a new sample and examining children’s responses to the narrator’s instructional prompts and associations with learning outcomes. Results indicated that children were highly successful in responding with the narrator’s task-management prompts (i.e., turn the page), particularly after the first book. Children were much less proficient in correctly responding to the narrator’s word-teaching prompts (i.e., “say enormous”), but improved over additional storybooks. Exposure to the intervention accelerated children’s weekly oral language learning, and effect sizes were comparable to those of Spencer et al. Children’s increased word knowledge was positively correlated with their correct responding to the narrator’s word-teaching prompts in particular. Implications for research and practice are discussed.",916dfa8cac8356c6c5aeeb5718e1b51db2a81c0d,Semantic Scholar,,highly relevant,"The paper discusses Automatic In-Context Learning, which leverages self-produced contexts, breaking away from traditional model training and moving into the realm of post-training prompt engineering." "learning to reason the influence of instruction, prompts and scaffolding, metacognitive knowledge, and general intelligence on informal reasoning about everyday social and political issues",['David Perkins'],https://www.cambridge.org/core/services/aop-cambridge-core/content/view/FE688448007AA83A3A346BEC4DE912B8/S1930297500005350a.pdf/div-class-title-learning-to-reason-the-influence-of-instruction-prompts-and-scaffolding-metacognitive-knowledge-and-general-intelligence-on-informal-reasoning-about-everyday-social-and-political-issues-div.pdf,2019-11-01,,"Twelve experiments examined ways of improving informal reasoning, as assessed by presenting students with accessible, current, and interesting social and political issues, eliciting reasoning about them, and scoring the reasoning for quality of argument.
The experiments addressed: (1) the impact of established instructional programs that emphasized critical thinking (Experiments 1–4); (2) the impact of an investigator-designed high school level minicourse (Experiments 5–7); (3) the responsiveness of subjects to prompts that asked them to develop arguments more fully, and the relation of their responses to general intelligence (Experiments 8–10); (4) checks on the validity of the testing methodology (Experiments 11–12). Two of the established instructional programs had a beneficial effect. The minicourse had a particularly large effect on students’ attention to the other side of the case, the most neglected aspect of informal reasoning. The prompting studies showed that subjects could develop their arguments far more than they normally did. Finally, subjects with higher intelligence were actually somewhat more biased in their reasoning. In summary: people can reason much better than they typically do on the sorts of issues posed; people are not performing near the limits of their abilities; strategies and standards of good reasoning can improve reasoning; and education can develop students’ reasoning much further than education typically does.",932183fec732ec617b592fdef5da3ff7222f0797,Semantic Scholar,,somewhat relevant,"The paper discusses the use of modulated text prompts in conjunction with a vision model, focusing on how visual context influences prompt effectiveness, which aligns with prompt engineering concepts." speciallydesigned outoforder processor architecture for microcontrollers,"['Yunhao Hu', 'Jie Chen', 'Kaiben Zhu', 'Qijun Xing', 'Wei Liu', 'Jun Shen', 'Ge Gao']",https://www.mdpi.com/2079-9292/11/19/2989/pdf?version=1663754025,2022-09-21,,"In very large-scale integration circuit (VLSI) systems, microcontrollers are often implanted to manage the whole system to complete the given computing tasks. They play an essential part as regulators, which should allocate resources steadily and issue instructions promptly to drive functional units. However, most of the recent research focuses on the operation at the software level or the scheduling at the SoC level, ignoring the impact of the microarchitecture and the features of controlled sub-modules. This paper analyzes the requirements of microcontrollers in the VLSI system with various constraints and conditions that should be considered in the hardware implementation of such microarchitecture. Furthermore, this paper takes an open-source design using RISC-V ISA as the prototype to implement hardware microarchitecture. This design integrates the techniques of out-of-order processing, which are usually used on superscalar processors. As a result, the design quadruples the number of pipelined instructions, greatly alleviating the stalling of the instruction stream with a maximum extra look up table utilization of 18.37% in FPGA implementation.",b5681fa49b027acb929e71091aafc4f0dec1871f,Semantic Scholar,,highly relevant,"The paper mentions the use of textual description and visual examples as multimodal prompts for in-context learning, indicating its relevance to the topic of prompt engineering." the face of a surgeon an analysis of demographic representation in three leading artificial intelligence texttoimage generators,"['R. Ali', 'Oliver Y. Tang', 'Ian D. Connolly', 'H. Abdulrazeq', 'Fatima N. Mirza', 'Rachel K. Lim', 'Benjamin R. Johnston', 'Michael', 'W. Groff', 'Theresa Williamson', 'K. Svokos', 'Tiffany J. Libby', 'John H. Shin', 'Z. Gokaslan', 'Curtis E. 
Doberstein', 'James Zou', 'Wael Asaad']",https://www.medrxiv.org/content/medrxiv/early/2023/05/29/2023.05.24.23290463.full.pdf,2023-05-29,,"Background: This study investigates the accuracy of three prominent artificial intelligence (AI) text-to-image generators-DALL-E 2, Midjourney, and Stable Diffusion-in representing the demographic realities in the surgical profession, addressing raised concerns about the perpetuation of societal biases, especially profession-based stereotypes. Methods: A cross-sectional analysis was conducted on 2,400 images generated across eight surgical specialties by each model. An additional 1,200 images were evaluated based on geographic prompts for three countries. Images were generated using a prompt template, ""A photo of the face of a [blank]"", with blank replaced by a surgical specialty. Geographic-based prompting was evaluated by specifying the most populous countries for three continents (United States, Nigeria, and China). Results: There was a significantly higher representation of female (average=35.8% vs. 14.7%, P<0.001) and non-white (average=37.4% vs. 22.8%, P<0.001) surgeons among trainees than attendings. DALL-E 2 reflected attendings' true demographics for female surgeons (15.9% vs. 14.7%, P=0.386) and non-white surgeons (22.6% vs. 22.8%, P=0.919) but underestimated trainees' representation for both female (15.9% vs. 35.8%, P<0.001) and non-white (22.6% vs. 37.4%, P<0.001) surgeons. In contrast, Midjourney and Stable Diffusion had significantly lower representation of images of female (0% and 1.8%, respectively) and non-white (0.5% and 0.6%, respectively) surgeons than DALL-E 2 or true demographics (all P<0.001). Geographic-based prompting increased non-white surgeon representation (all P<0.001), but did not alter female representation (P=0.779). Conclusions: While Midjourney and Stable Diffusion amplified societal biases by depicting over 98% of surgeons as white males, DALL-E 2 depicted more accurate demographics, although all three models underestimated trainee representation. These findings underscore the necessity for guardrails and robust feedback systems to prevent AI text-to-image generators from exacerbating profession-based stereotypes, and the importance of bolstering the representation of the evolving surgical field in these models' future training sets.",0c8cabcabd56ea48958bde8535a9da5ef5a7368c,Semantic Scholar,,somewhat relevant,"The paper discusses the architecture of transformers in enhancing in-context learning, specifically mentioning how transformers learn from the prompt, which indicates the use of prompting." using large language models to simulate multiple humans,"['Gati Aher', 'RosaI. Arriaga', 'A. Kalai']",https://arxiv.org/pdf/2208.10264,,,"We propose a method for using a large language model, such as GPT-3, to simulate responses of different humans in a given context. We test our method by attempting to reproduce well-established economic, psycholinguistic, and social experiments. The method requires prompt templates for each experiment. Simulations are run by varying the (hypothetical) subject details, such as name, and analyzing the text generated by the language model. To validate our methodology, we use GPT-3 to simulate the Ultimatum Game, garden path sentences, risk aversion, and the Milgram Shock experiments. In order to address concerns of exposure to these studies in training data, we also evaluate simulations on novel variants of these studies.
We show that it is possible to simulate responses of different people and that their responses are consistent with prior human studies from the literature. Across all studies, the distributions generated by larger language models better align with prior experimental results, suggesting a trend that future language models may be used for even more faithful simulations of human responses. Our use of a language model for simulation is contrasted with anthropomorphic views of a language model as having its own behavior.",21f377c5d89f85f2bd802f4f6abe1df4748ec07b,Semantic Scholar,,highly relevant,"The paper discusses using a task-unified prompt in the context of skeleton sequence modeling, which directly relates to using prompting techniques for task accomplishment." research on implicit intent recognition method based on prompt learning,"['Shuhua Liu', 'Lanting Li', 'Ming Fang', 'Chih-Cheng Hung', 'Shihao Yang']",https://www.researchsquare.com/article/rs-1891913/latest.pdf,,,"As one of the core modules of the dialogue system, intent recognition plays an important role in human-computer interaction. Most of the existing intent recognition research is limited to simple, direct, and explicit intent recognitions. However, the natural human-computer interactions are flexible and diverse, and the expressions are often the euphemistic implicit intentions. Therefore, the implicit intent recognition brings new research challenges in this field. This paper pioneers a Chinese Implicit Intent Dataset CIID, which covers 7 common intents from different fields, and the data is the text containing the user’s implicit intent. Based on this corpus, it is the first time prompt learning is employed for implicit intent recognition and by constructing a suitable prompt template, the model can get “relevant hints” to dig out the true intention of the user. Finally, this paper evaluates a range of classification models on CIID dataset. Experimental results show that the recognition rate of the proposed model is 97.6%, and achieves the state-of-the-art recognition accuracy. Furthermore, since it is difficult to collect the user’s implicit intention data, this paper also explores the performance of these classification models on the CIID dataset with few-shot settings, and the experimental results show when the training data is reduced to 4.7%, the recognition rate of the proposed model can still keep 92.4%, which is significantly higher than other baseline models, the results further prove this proposed method is advanced and robust.",330009bebca2152592067c9616e0d86505d49e27,Semantic Scholar,,somewhat relevant,"The paper discusses leveraging 'Knowledge and Few-shot Enhancement In-context Learning (KFE)' framework for LLMs, which aligns with the concept of prompt engineering, specifically through an application of in-context learning techniques." knowledgeguided prompt learning for fewshot text classification,"['Liangguo Wang', 'Ruoyu Chen', 'Li Li']",https://www.mdpi.com/2079-9292/12/6/1486/pdf?version=1679462243,2023-03-21,,"Recently, prompt-based learning has shown impressive performance on various natural language processing tasks in few-shot scenarios. The previous study of knowledge probing showed that the success of prompt learning contributes to the implicit knowledge stored in pre-trained language models. However, how this implicit knowledge helps solve downstream tasks remains unclear.
In this work, we propose a knowledge-guided prompt learning method that can reveal relevant knowledge for text classification. Specifically, a knowledge prompting template and two multi-task frameworks were designed, respectively. The experiments demonstrated the superiority of combining knowledge and prompt learning in few-shot text classification.",3cd05e8137676c8a7e488d7b621b8fb3f2f2a399,Semantic Scholar,,somewhat relevant,"The paper describes using in-context learning for automated distractor and feedback generation, which implies the use of prompts in guiding large language models." large language models are complex table parsers,"['Bowen Zhao', 'Changkai Ji', 'Yuejie Zhang', 'Wen He', 'Yingwen Wang', 'Qing Wang', 'Rui Feng', 'Xiaobo Zhang']",https://aclanthology.org/2023.emnlp-main.914.pdf,2023-12-13,,"With the Generative Pre-trained Transformer 3.5 (GPT-3.5) exhibiting remarkable reasoning and comprehension abilities in Natural Language Processing (NLP), most Question Answering (QA) research has primarily centered around general QA tasks based on GPT, neglecting the specific challenges posed by Complex Table QA. In this paper, we propose to incorporate GPT-3.5 to address such challenges, in which complex tables are reconstructed into tuples and specific prompt designs are employed for dialogues. Specifically, we encode each cell's hierarchical structure, position information, and content as a tuple. By enhancing the prompt template with an explanatory description of the meaning of each tuple and the logical reasoning process of the task, we effectively improve the hierarchical structure awareness capability of GPT-3.5 to better parse the complex tables. Extensive experiments and results on Complex Table QA datasets, i.e., the open-domain dataset HiTAB and the aviation domain dataset AIT-QA show that our approach significantly outperforms previous work on both datasets, leading to state-of-the-art (SOTA) performance.",5d2b77ae8508e277fe9b840a471b7dfb00e806ff,Semantic Scholar,,highly relevant,"The paper focuses on leveraging in-context learning by using demonstrations in precondition prompts for tasks, which aligns with the principles of prompt engineering." hybridprompt bridging language models and human priors in prompt tuning for visual question answering,"['Zhiyuan Ma', 'Zhihuan Yu', 'Jianjun Li', 'Guohui Li']",https://ojs.aaai.org/index.php/AAAI/article/download/26569/26341,2023-06-26,,"Visual Question Answering (VQA) aims to answer the natural language question about a given image by understanding multimodal content. However, the answer quality of most existing visual-language pre-training (VLP) methods is still limited, mainly due to: (1) Incompatibility. Upstream pre-training tasks are generally incompatible with downstream question answering tasks, which makes the knowledge from the language model not well transferable to downstream tasks, and greatly limits their performance in few-shot scenarios; (2) Under-fitting. They generally do not integrate human priors to compensate for universal knowledge from language models, so as to fit the challenging VQA problem and generate reliable answers. To address these issues, we propose HybridPrompt, a cloze- and verify-style hybrid prompt framework with bridging language models and human priors in prompt tuning for VQA. 
Specifically, we first modify the input questions into the cloze-style prompts to narrow the gap between upstream pre-training tasks and downstream VQA task, which ensures that the universal knowledge in the language model can be better transferred to subsequent human prior-guided prompt tuning. Then, we imitate the cognitive process of human brain to introduce topic and sample related priors to construct a dynamic learnable prompt template for human prior-guided prompt learning. Finally, we add fixed-length learnable free-parameters to further enhance the generalizability and scalability of prompt learning in the VQA model. Experimental results verify the effectiveness of HybridPrompt, showing that it achieves competitive performance against previous methods on widely-used VQAv2 dataset and obtains new state-of-the-art results. Our code is released at: https://github.com/zhizhi111/hybrid.",6470a35a46bbc8a844954af9fdf31e440d1aa289,Semantic Scholar,,somewhat relevant,"The paper focuses on a multimodal large language model that employs in-context learning (ICL) which is a form of prompt engineering, despite not explicitly mentioning hard prefix prompts." the utility of chatgpt for cancer treatment information,"['Shan Chen', 'B. Kann', 'M. Foote', 'H. Aerts', 'G. Savova', 'R. Mak', 'D. Bitterman']",https://www.medrxiv.org/content/medrxiv/early/2023/03/23/2023.03.16.23287316.full.pdf,2023-03-23,,"The use of large language models (LLMs) such as ChatGPT for medical question-answering is becoming increasingly popular. However, there are concerns that these models may generate and amplify medical misinformation. Because cancer patients frequently seek to educate themselves through online resources, some individuals will likely use ChatGPT to obtain cancer treatment information. This study evaluated the performance and robustness of ChatGPT in providing breast, prostate, and lung cancer treatment recommendations that align with National Comprehensive Cancer Network (NCCN) guidelines. Four prompt templates were created to explore how differences in how the query is posed impacts response. ChatGPT output was scored by 3 oncologists and a 4th oncologist adjudicated in cases of disagreement. ChatGPT provided at least one NCCN-concordant recommendation for 102/104 (98%) prompts. However, 35/102 (34.3%) of these also included a recommendation that was at least partially non-concordant with NCCN guidelines. Responses varied based on prompt type. In conclusion, ChatGPT did not perform well at reliably and robustly providing cancer treatment recommendations. Patients and clinicians should be aware of the limitations of ChatGPT and similar technologies for self-education.",763d953e671e2b6c6d0df2f5bc5472fb6ce074de,Semantic Scholar,,somewhat relevant,"The paper focuses on in-context learning (ICL), a technique closely related to prompt engineering, as it involves using examples to guide model understanding, fitting the concept of using prompts." assessing the accuracy of chatgpt use for risk management in construction projects,['H. Aladağ'],https://www.mdpi.com/2071-1050/15/22/16071/pdf?version=1700231602,2023-11-17,,"Artificial Intelligence (AI) is considered promising digital technology that has important opportunities for enhancing project oversight and delivering improved decision-making in the risk management domain. However, there is a limited amount of research that has evaluated AI tools’ performance in risk management. 
Therefore, with the intention of sustaining more accurate risk-based decision-making process in the construction industry, this paper investigates the accuracy of ChatGPT in risk management for different project types. In this context, Key Performance Indicators (KPIs) related to each risk management sub-process were determined, and then a questionnaire that consisted of prompt templates was prepared for collecting data from ChatGPT. Afterwards, ChatGPT’s responses were evaluated by experts with focus group sessions. The findings indicate that ChatGPT has a moderate level of performance in managing risks. It provides more accurate knowledge in risk response and risk monitoring rather than risk identification and risk analysis sub-processes. This research paves the way for future studies by demonstrating an implication of ChatGPT use for risk-based decision making. In addition, gaining insight into the precision of ChatGPT in the risk-based decision-making process will empower decision-makers to establish resilience in business operations through technology-driven risk management.",82460a2aca3276a2a90d63a3c6b0f26ed834cecb,Semantic Scholar,,highly relevant,"The paper introduces a new ICL framework for few-shot nested NER with enhancements in prompt design, aligning directly with prompt engineering." reducing spurious correlations in aspectbased sentiment analysis with explanation from large language models,"['Qianlong Wang', 'Keyang Ding', 'Bin Liang', 'Min Yang', 'Ruifeng Xu']",https://aclanthology.org/2023.findings-emnlp.193.pdf,,,"Recently, aspect-based sentiment analysis (ABSA) models have yielded promising results. However, they are susceptible to learning spurious correlations between certain words of the input text and output labels while modeling the sentiment feature of the aspect. This spurious correlation will potentially undermine the performance of ABSA models. One direct solution for this problem is to make the model see and learn an explanation of sentiment expression rather than certain words. Motivated by this, we exploit explanations for the sentiment polarity of each aspect from large language models (LLMs) to reduce spurious correlations in ABSA. First, we formulate a prompt template that wraps the sentence, an aspect, and the sentiment label. This template is utilized to prompt LLMs to generate an appropriate explanation that states the sentiment cause. Then, we propose two straightforward yet effective methods to leverage the explanation for preventing the learning of spurious correlations. We conducted extensive comparative experiments on five datasets by integrating them with some representative ABSA models. Results show that our methods can achieve performance gains and enhance the performance and generalization ability of ABSA models.",83d9593a65ae8548a37afd775f1b1660b5d7df6c,Semantic Scholar,,somewhat relevant,"The paper mentions 'Chain-of-Thought prompting' as a strategy to improve Text-to-Image In-Context Learning, directly indicating its relevance to prompt engineering." advanced prompting as a catalyst empowering large language models in the management of gastrointestinal cancers,"['J.
Yuan', 'Peng Bao', 'Zi Chen', 'Mingze Yuan', 'Jie Zhao', 'Jiahua Pan', 'Yi Xie', 'Yanshuo Cao', 'Yakun Wang', 'Zhenghang Wang', 'Zhihao Lu', 'Xiaotian Zhang', 'Jian Li', 'Lei Ma', 'Yang Chen', 'Li Zhang', 'Lin Shen', 'Bin Dong']",https://www.the-innovation.org/data/article/export-pdf?id=64db4fd54228a72545780714,,,"Large Language Models' (LLMs) performance in healthcare can be significantly impacted by prompt engineering. However, the area of study remains relatively uncharted in gastrointestinal oncology until now. Our research delves into this unexplored territory, investigating the efficacy of varied prompting strategies, including simple prompts, templated prompts, in-context learning (ICL), and multi-round iterative questioning, for optimizing the performance of LLMs within a medical setting. We develop a comprehensive evaluation system to assess the performance of LLMs across multiple dimensions. This robust evaluation system ensures a thorough assessment of the LLMs' capabilities in the field of medicine. Our findings suggest a positive relationship between the comprehensiveness of the prompts and the LLMs' performance. Notably, the multi-round strategy, which is characterized by iterative question-and-answer rounds, consistently yields the best results. ICL, a strategy that capitalizes on interrelated contextual learning, also displays significant promise, surpassing the outcomes achieved with simpler prompts. The research underscores the potential of advanced prompt engineering and iterative learning approaches for boosting the applicability of LLMs in healthcare. We recommend that additional research be conducted to refine these strategies and investigate their potential integration, to truly harness the full potential of LLMs in medical applications. ",995b2f650f55de6077b87db6dadb01cecd86dbd7,Semantic Scholar,,highly relevant,"The paper focuses on adjusting prompt instructions to improve dialogue generation quality, directly tying to the area of hard prefix prompting." promptassisted relation fusion in knowledge graph acquisition,"['Xiaonan Jing', 'Julia M. Rayz']",https://figshare.com/articles/thesis/PROMPT-ASSISTED_RELATION_FUSION_IN_KNOWLEDGE_GRAPH_ACQUISITION/21687428/1/files/38460428.pdf,2023-10-01,,"This paper investigated how prompt-based learning techniques can assist with relation fusion in Knowledge Graph (KG) acquisition. We created an unsupervised framework to generate a KG from a real-world dataset. The framework incorporates prompting with knowledge entity metadata and generating predicate embeddings with the pretrained Masked Language Model (MLM) RoBERTa. Predicate embeddings were clustered to form conceptual groups and feature tokens were used to derive relation labels. In addition, we conducted a comparative study on the effects of different prompting templates. The resulting relation labels were evaluated by human annotators, which indicated that prompt-based learning, if applied appropriately, can help with deducing conceptualized relations. Our framework proposed a way to improve the quality of KGs acquired using traditional Relation Extraction (RE). It can also assist human experts effectively in semi-automated knowledge acquisition.",bcca9c8aefd11ab2a4e7e8998f3292d5483e51a2,Semantic Scholar,,somewhat relevant,"The paper focuses on using in-context learning with Large Language Models for generating metamorphic specifications, which involves the application of prompting techniques."
navigating cultural chasms exploring and unlocking the cultural pov of texttoimage models,"['Mor Ventura', 'Eyal Ben-David', 'Anna Korhonen', 'Roi Reichart']",https://arxiv.org/pdf/2310.01929,2023-10-03,,"Text-To-Image (TTI) models, such as DALL-E and StableDiffusion, have demonstrated remarkable prompt-based image generation capabilities. Multilingual encoders may have a substantial impact on the cultural agency of these models, as language is a conduit of culture. In this study, we explore the cultural perception embedded in TTI models by characterizing culture across three hierarchical tiers: cultural dimensions, cultural domains, and cultural concepts. Based on this ontology, we derive prompt templates to unlock the cultural knowledge in TTI models, and propose a comprehensive suite of evaluation techniques, including intrinsic evaluations using the CLIP space, extrinsic evaluations with a Visual-Question-Answer (VQA) model and human assessments, to evaluate the cultural content of TTI-generated images. To bolster our research, we introduce the CulText2I dataset, derived from four diverse TTI models and spanning ten languages. Our experiments provide insights regarding Do, What, Which and How research questions about the nature of cultural encoding in TTI models, paving the way for cross-cultural applications of these models.",c811b7e98b755ab7d34baa466796d00a93f662e7,Semantic Scholar,,somewhat relevant,"The paper mentions the investigation of 'variations in prompts' as part of enhancing model performance and calibration, which is directly relevant to the topic of prompt engineering." grounded theory and collaborative design approach to disability storytelling on tiktok,['Morgan Lundy'],https://iopn.library.illinois.edu/journals/aliseacp/article/download/1378/1102,2023-09-29,,"This developing dissertation explores the use of TikTok as a platform for individual and collective storytelling and information creation practices – within a specific online health community of people experiencing painful, invisible, and difficult to diagnose central sensitivity syndromes (CSSs) – to understand and support these embodied, creative, and collective information behaviors. Ongoing data collection indicates that people with CSSs are using TikTok affordances to tell and scaffold complex micro-stories about their expertise and social experiences of disability: by employing iconographic elements to make disability visual; intimate cinematography; audio, visual, and community-specific mimetic options; and platform-specific novel feature use. The research design draws upon critical disabilities studies (CDS) sensitizing concepts. A constructivist grounded theory approach will be employed, by theoretically sampling TikTok micro-videos, their top comments, and, by the time of this presentation, conducting semi-structured interviews with CSS TikTok community members. This poster also discusses these preliminary results, as well as a novel initial sampling approach which addresses both the hashtag and algorithmic logics of the platform, and an implementation of feminist ethics of care in research methods. Then, three codesign workshops with individuals experiencing CSSs will develop creative storytelling materials that can be utilized in various contexts. 
These workshops promote the inclusion of disabled community members as co-designers and aim to co-design physical and digital storytelling resources such as prompts, templates, and TikTok features. These findings expand storytelling theory into the health domain, introduce and define algorithmically mediated online health communities, and promote critical disability studies perspectives in information science.",c8c63a9e67c65d78fdfa9ab7e6d94e23cd1ed3d1,Semantic Scholar,,highly relevant,"The paper introduces a dynamic in-context learning paradigm with tailored prompts for ChatGPT, directly linking to the topic of prompt engineering." questions about contracts prompt templates for structured answer generation,"['Adam Roegiest', 'Radha Chitta', 'Jonathan Donnelly', 'Maya Lash', 'A. Vtyurina', 'Francois Longtin']",https://aclanthology.org/2023.nllp-1.8.pdf,,,"Finding the answers to legal questions about specific clauses in contracts is an important analysis in many legal workflows (e.g., understanding market trends, due diligence, risk mitigation) but more important is being able to do this at scale. In this paper, we present an examination of using large language models to produce (partially) structured answers to legal questions; primarily in the form of multiple choice and multiple select. We first show that traditional semantic matching is unable to perform this task at acceptable accuracy and then show how question specific prompts can achieve reasonable accuracy across a range of generative models. Finally, we show that much of this effectiveness can be maintained when generalized prompt templates are used rather than question specific ones.",dcb42cf22c85cb8921f508340bf1643e5be24a65,Semantic Scholar,,highly relevant,"The paper discusses using three prompting strategies with GPT models for in-context learning, directly relating to the practice of prompt engineering." taxoprompt a promptbased generation method with taxonomic context for selfsupervised taxonomy expansion,"['Hongyuan Xu', 'Yunong Chen', 'Zichen Liu', 'Yanlong Wen', 'Xiaojie Yuan']",https://www.ijcai.org/proceedings/2022/0615.pdf,2022-07-01,,"Taxonomies are hierarchical classifications widely exploited to facilitate downstream natural language processing tasks. The taxonomy expansion task aims to incorporate emergent concepts into the existing taxonomies. Prior works focus on modeling the local substructure of taxonomies but neglect the global structure. In this paper, we propose TaxoPrompt, a framework that learns the global structure by prompt tuning with taxonomic context. Prompt tuning leverages a template to formulate downstream tasks into masked language model form for better distributed semantic knowledge use. To further infuse global structure knowledge into language models, we enhance the prompt template by exploiting the taxonomic context constructed by a variant of the random walk algorithm. Experiments on seven public benchmarks show that our proposed TaxoPrompt is effective and efficient in automatically expanding taxonomies and achieves state-of-the-art performance.",ea2fb89403ea1cd6af000e761e2f72eb7c150607,Semantic Scholar,,highly relevant,"The paper discusses the use of dedicated prompt designs for slot filling tasks with LLMs, which directly relates to the concept of hard prefix prompting in prompt engineering." aspectbased sentiment classification with sequential crossmodal semantic graph,"['Yufen Huang', 'Zhuo Chen', 'Wen Zhang', 'Jiaoyan Chen', 'Jeff Z.
Pan', 'Zhen Yao', 'Yujie Xie', 'Hua-zeng Chen']",https://arxiv.org/pdf/2208.09417,,,"Multi-modal aspect-based sentiment classification (MABSC) is an emerging classification task that aims to classify the sentiment of a given target such as a mentioned entity in data with different modalities. In typical multi-modal data with text and image, previous approaches do not make full use of the fine-grained semantics of the image, especially in conjunction with the semantics of the text and do not fully consider modeling the relationship between fine-grained image information and target, which leads to insufficient use of image and inadequate to identify fine-grained aspects and opinions. To tackle these limitations, we propose a new framework SeqCSG including a method to construct sequential cross-modal semantic graphs and an encoder-decoder model. Specifically, we extract fine-grained information from the original image, image caption, and scene graph, and regard them as elements of the cross-modal semantic graph as well as tokens from texts. The cross-modal semantic graph is represented as a sequence with a multi-modal visible matrix indicating relationships between elements. In order to effectively utilize the cross-modal semantic graph, we propose an encoder-decoder method with a target prompt template. Experimental results show that our approach outperforms existing methods and achieves the state-of-the-art on two standard datasets MABSC. Further analysis demonstrates the effectiveness of each component and our model can implicitly learn the correlation between the target and fine-grained information of the image.",fb7ed529fec665450925f9a75129cb69be83b67a,Semantic Scholar,,highly relevant,"The paper discusses constructing in-context example sets for triggering specific behaviors in language models, directly relating to the manipulation and optimization of prompts." sensitivity and robustness of large language models to prompt in japanese,"['Chengguang Gan', 'Tatsunori Mori']",http://arxiv.org/pdf/2305.08714,,,"Prompt Engineering has gained significant relevance in recent years, fueled by advancements in pre-trained and large language models. However, a critical issue has been identified within this domain: the lack of sensitivity and robustness of these models towards Prompt Templates, particularly in lesser-studied languages such as Japanese. This paper explores this issue through a comprehensive evaluation of several representative Large Language Models (LLMs) and a widely-utilized pre-trained model (PLM), T5. These models are scrutinized using a benchmark dataset in Japanese, with the aim to assess and analyze the performance of the current multilingual models in this context. Our experimental results reveal startling discrepancies. A simple modification in the sentence structure of the Prompt Template led to a drastic drop in the accuracy of GPT-4 from 49.21 to 25.44. This observation underscores the fact that even the highly performance GPT-4 model encounters significant stability issues when dealing with diverse Japanese prompt templates, rendering the consistency of the model’s output results questionable.
In light of these findings, we conclude by proposing potential research trajectories to further enhance the development and performance of Large Language Models in their current stage.",ff77cc047f5e7a2fdf8563d05e1ba4b383e859a4,Semantic Scholar,,somewhat relevant,"The paper mentions the use of 'chain-of-thought prompting' which falls under prompt engineering, but its focus is on utilizing LLM for few-shot image classification and segmentation, not explicitly on the development or analysis of hard prefix prompts." malla demystifying realworld large language model integrated malicious services,"['Zilong Lin', 'Jian Cui', 'Xiaojing Liao', 'XiaoFeng Wang']",http://arxiv.org/pdf/2401.03315v1.pdf,2024-01-06,," The underground exploitation of large language models (LLMs) for malicious services (i.e., Malla) is witnessing an uptick, amplifying the cyber threat landscape and posing questions about the trustworthiness of LLM technologies. However, there has been little effort to understand this new cybercrime, in terms of its magnitude, impact, and techniques. In this paper, we conduct the first systematic study on 212 real-world Mallas, uncovering their proliferation in underground marketplaces and exposing their operational modalities. Our study discloses the Malla ecosystem, revealing its significant growth and impact on today's public LLM services. Through examining 212 Mallas, we uncovered eight backend LLMs used by Mallas, along with 182 prompts that circumvent the protective measures of public LLM APIs. We further demystify the tactics employed by Mallas, including the abuse of uncensored LLMs and the exploitation of public LLM APIs through jailbreak prompts. Our findings enable a better understanding of the real-world exploitation of LLMs by cybercriminals, offering insights into strategies to counteract this cybercrime.",,arXiv,"['cs.cr', 'cs.ai']",somewhat relevant,"The paper focuses on the configuration of in-context sequences for enhancing In-Context Learning in LVLMs, which is closely related to the process of crafting efficient prompts, especially in the context of applications like VQA." using natural language explanations to improve robustness of incontext learning for natural language inference,"['Xuanli He', 'Yuxiang Wu', 'Oana-Maria Camburu', 'Pasquale Minervini', 'Pontus Stenetorp']",http://arxiv.org/pdf/2311.07556v1.pdf,2023-11-13,," Recent studies have demonstrated that large language models (LLMs) excel in diverse tasks through in-context learning (ICL) facilitated by task-specific prompts and examples. However, the existing literature shows that ICL encounters performance deterioration when exposed to adversarial inputs. Enhanced performance has been observed when ICL is augmented with natural language explanations (NLEs) (we refer to it as X-ICL). Thus, this work investigates whether X-ICL can improve the robustness of LLMs on a suite of seven adversarial and challenging natural language inference datasets. Moreover, we introduce a new approach to X-ICL by prompting an LLM (ChatGPT in our case) with few human-generated NLEs to produce further NLEs (we call it ChatGPT few-shot), which we show superior to both ChatGPT zero-shot and human-generated NLEs alone. We evaluate five popular LLMs (GPT3.5-turbo, LLaMa2, Vicuna, Zephyr, Mistral) and show that X-ICL with ChatGPT few-shot yields over 6% improvement over ICL.
Furthermore, while prompt selection strategies were previously shown to significantly improve ICL on in-distribution test sets, we show that these strategies do not match the efficacy of the X-ICL paradigm in robustness-oriented evaluations.",,arXiv,['cs.cl'],somewhat relevant,"The paper discusses in-context learning with image and text prompts in Large Multi-modal Models (LMMs), which relates to prompting but is more focused on multimodal contexts and pre-filtering methods rather than hard prefix prompt engineering directly." algo synthesizing algorithmic programs with llmgenerated oracle verifiers,"['Kexun Zhang', 'Danqing Wang', 'Jingtao Xia', 'William Yang Wang', 'Lei Li']",http://arxiv.org/pdf/2305.14591v3.pdf,2023-05-24,," Large language models (LLMs) excel at implementing code from functionality descriptions but struggle with algorithmic problems that require not only implementation but also identification of the suitable algorithm. Moreover, LLM-generated programs lack guaranteed correctness and require human verification. To address these challenges, we propose ALGO, a framework that synthesizes Algorithmic programs with LLM-Generated Oracles to guide the generation and verify their correctness. ALGO first generates a reference oracle by prompting an LLM to exhaustively enumerate all the combinations of relevant variables. This oracle is then utilized to guide an arbitrary search strategy in exploring the algorithm space and to verify the synthesized algorithms. Our study shows that the LLM-generated oracles are correct for 88% of the cases. With the oracles as verifiers, ALGO can be integrated with any existing code generation model in a model-agnostic manner to enhance its performance. Experiments show that when equipped with ALGO, we achieve an 8x better one-submission pass rate over the Codex model and a 2.6x better one-submission pass rate over CodeT, the current state-of-the-art model on CodeContests. We can also get 1.3x better pass rate over the ChatGPT Code Interpreter on unseen problems. The problem set we used for testing, the prompts we used, the verifier and solution programs, and the test cases generated by ALGO are available at https://github.com/zkx06111/ALGO.",,arXiv,"['cs.cl', 'cs.se']",highly relevant,The paper includes a track on 'prompt tuning' which directly relates to prompt engineering practices. multistage collaborative knowledge distillation from large language models for semisupervised sequence generation,"['Jiachen Zhao', 'Wenlong Zhao', 'Andrew Drozdov', 'Benjamin Rozonoyer', 'Md Arafat Sultan', 'Jay-Yoon Lee', 'Mohit Iyyer', 'Andrew McCallum']",http://arxiv.org/pdf/2311.08640v2.pdf,2023-11-15,," We study semi-supervised sequence generation tasks where labeled data are too scarce to effectively finetune a model and at the same time few-shot prompting of a large language model (LLM) has suboptimal performance. This happens when a task, such as parsing, is expensive to annotate and also unfamiliar to a pretrained LLM. In this paper, we present a discovery that student models distilled from an in-context learned LLM can often generalize better than their teacher on such tasks. Leveraging this finding, we present a new method -- multistage collaborative knowledge distillation from an LLM (MCKD) -- for such tasks. MCKD first few-shot prompts an LLM to produce pseudolabels for unlabeled data. At each intermediate knowledge distillation (KD) stage, a new pair of students is trained on disjoint partitions of the pseudolabeled data.
Each student then produces new and improved pseudolabels for its unseen partition to be used in the next stage of distillation. We demonstrate the advantage of multistage cross-partition labeling on several syntactic and semantic parsing tasks. On CRAFT biomedical parsing, for example, 3-stage MCKD with 50 labeled examples outperforms the prompted LLM and vanilla KD by 7.5% and 3.7% parsing F1, respectively, and matches the performance of supervised finetuning with 500 examples.",,arXiv,"['cs.cl', 'cs.lg']",highly relevant,"The paper discusses In-Context Learning (ICL) using prompts for debiasing, which directly relates to prompt engineering." making large language models better knowledge miners for online marketing with progressive prompting augmentation,"['Chunjing Gan', 'Dan Yang', 'Binbin Hu', 'Ziqi Liu', 'Yue Shen', 'Zhiqiang Zhang', 'Jinjie Gu', 'Jun Zhou', 'Guannan Zhang']",http://arxiv.org/pdf/2312.05276v1.pdf,2023-12-08,," Nowadays, the rapid development of mobile economy has promoted the flourishing of online marketing campaigns, whose success greatly hinges on the efficient matching between user preferences and desired marketing campaigns where a well-established Marketing-oriented Knowledge Graph (dubbed as MoKG) could serve as the critical ""bridge"" for preference propagation. In this paper, we seek to carefully prompt a Large Language Model (LLM) with domain-level knowledge as a better marketing-oriented knowledge miner for marketing-oriented knowledge graph construction, which is however non-trivial, suffering from several inevitable issues in real-world marketing scenarios, i.e., uncontrollable relation generation of LLMs, insufficient prompting ability of a single prompt, the unaffordable deployment cost of LLMs. To this end, we propose PAIR, a novel Progressive prompting Augmented mIning fRamework for harvesting marketing-oriented knowledge graph with LLMs. In particular, we reduce the pure relation generation to an LLM based adaptive relation filtering process through the knowledge-empowered prompting technique. Next, we steer LLMs for entity expansion with progressive prompting augmentation, followed by a reliable aggregation with comprehensive consideration of both self-consistency and semantic relatedness. In terms of online serving, we specialize in a small and white-box PAIR (i.e., LightPAIR), which is fine-tuned with a high-quality corpus provided by a strong teacher-LLM. Extensive experiments and practical applications in audience targeting verify the effectiveness of the proposed (Light)PAIR.",,arXiv,"['cs.ai', 'cs.lg']",somewhat relevant,"The paper focuses on using natural language prompting for controlling speaker identity and style in text-to-speech models, which aligns with the topic of prompt engineering." a strong baseline for temporal videotext alignment,"['Zeqian Li', 'Qirui Chen', 'Tengda Han', 'Ya Zhang', 'Yanfeng Wang', 'Weidi Xie']",http://arxiv.org/pdf/2312.14055v1.pdf,2023-12-21,," In this paper, we consider the problem of temporally aligning the video and texts from instructional videos, specifically, given a long-term video, and associated text sentences, our goal is to determine their corresponding timestamps in the video.
To this end, we establish a simple, yet strong model that adopts a Transformer-based architecture with all texts as queries, iteratively attending to the visual features, to infer the optimal timestamp. We conduct thorough experiments to investigate: (i) the effect of upgrading ASR systems to reduce errors from speech recognition, (ii) the effect of various visual-textual backbones, ranging from CLIP to S3D, to the more recent InternVideo, (iii) the effect of transforming noisy ASR transcripts into descriptive steps by prompting a large language model (LLM), to summarize the core activities within the ASR transcript as a new training dataset. As a result, our proposed simple model demonstrates superior performance on both narration alignment and procedural step grounding tasks, surpassing existing state-of-the-art methods by a significant margin on three public benchmarks, namely, 9.3% on HT-Step, 3.4% on HTM-Align and 4.7% on CrossTask. We believe the proposed model and dataset with descriptive steps can be treated as a strong baseline for future research in temporal video-text alignment. All codes, models, and the resulting dataset will be publicly released to the research community.",,arXiv,['cs.cv'],somewhat relevant,"The abstract mentions 'token-wise prompting' indicating the use of prompting techniques, but it is focused on utilizing large language models for time series forecasting rather than directly discussing or analyzing prompt engineering methods." maatphor automated variant analysis for prompt injection attacks,"['Ahmed Salem', 'Andrew Paverd', 'Boris Köpf']",http://arxiv.org/pdf/2312.11513v1.pdf,2023-12-12,," Prompt injection has emerged as a serious security threat to large language models (LLMs). At present, the current best-practice for defending against newly-discovered prompt injection techniques is to add additional guardrails to the system (e.g., by updating the system prompt or using classifiers on the input and/or output of the model.) However, in the same way that variants of a piece of malware are created to evade anti-virus software, variants of a prompt injection can be created to evade the LLM's guardrails. Ideally, when a new prompt injection technique is discovered, candidate defenses should be tested not only against the successful prompt injection, but also against possible variants. In this work, we present Maatphor, a tool to assist defenders in performing automated variant analysis of known prompt injection attacks. This involves solving two main challenges: (1) automatically generating variants of a given prompt according, and (2) automatically determining whether a variant was effective based only on the output of the model. This tool can also assist in generating datasets for jailbreak and prompt injection attacks, thus overcoming the scarcity of data in this domain. We evaluate Maatphor on three different types of prompt injection tasks. Starting from an ineffective (0%) seed prompt, Maatphor consistently generates variants that are at least 60% effective within the first 40 iterations.",,arXiv,"['cs.cr', 'cs.ai', 'cs.lg']",highly relevant,"The paper discusses the optimization of in-context examples (ICE) and the effect of task-specific instructions within prompts, directly relating to the practice of prompt engineering."
signedprompt a new approach to prevent prompt injection attacks against llmintegrated applications,['Xuchen Suo'],http://arxiv.org/pdf/2401.07612v1.pdf,2024-01-15,," The critical challenge of prompt injection attacks in Large Language Models (LLMs) integrated applications, a growing concern in the Artificial Intelligence (AI) field. Such attacks, which manipulate LLMs through natural language inputs, pose a significant threat to the security of these applications. Traditional defense strategies, including output and input filtering, as well as delimiter use, have proven inadequate. This paper introduces the 'Signed-Prompt' method as a novel solution. The study involves signing sensitive instructions within command segments by authorized users, enabling the LLM to discern trusted instruction sources. The paper presents a comprehensive analysis of prompt injection attack patterns, followed by a detailed explanation of the Signed-Prompt concept, including its basic architecture and implementation through both prompt engineering and fine-tuning of LLMs. Experiments demonstrate the effectiveness of the Signed-Prompt method, showing substantial resistance to various types of prompt injection attacks, thus validating its potential as a robust defense strategy in AI security.",,arXiv,"['cs.cr', 'cs.ai']",somewhat relevant,"The paper focuses on optimizing instructional texts through In-Context Learning in MMLMs, which is related to prompt engineering." look before you leap a universal emergent decomposition of retrieval tasks in language models,"['Alexandre Variengien', 'Eric Winsor']",http://arxiv.org/pdf/2312.10091v1.pdf,2023-12-13,," When solving challenging problems, language models (LMs) are able to identify relevant information from long and complicated contexts. To study how LMs solve retrieval tasks in diverse situations, we introduce ORION, a collection of structured retrieval tasks spanning six domains, from text understanding to coding. Each task in ORION can be represented abstractly by a request (e.g. a question) that retrieves an attribute (e.g. the character name) from a context (e.g. a story). We apply causal analysis on 18 open-source language models with sizes ranging from 125 million to 70 billion parameters. We find that LMs internally decompose retrieval tasks in a modular way: middle layers at the last token position process the request, while late layers retrieve the correct entity from the context. After causally enforcing this decomposition, models are still able to solve the original task, preserving 70% of the original correct token probability in 98 of the 106 studied model-task pairs. We connect our macroscopic decomposition with a microscopic description by performing a fine-grained case study of a question-answering task on Pythia-2.8b. Building on our high-level understanding, we demonstrate a proof of concept application for scalable internal oversight of LMs to mitigate prompt-injection while requiring human supervision on only a single input. Our solution improves accuracy drastically (from 15.5% to 97.5% on Pythia-12b). This work presents evidence of a universal emergent modular processing of tasks across varied domains and models and is a pioneering effort in applying interpretability for scalable internal oversight of LMs.",,arXiv,"['cs.ir', 'cs.cl', 'cs.lg']",highly relevant,"The paper explicitly details using in-context learning with LLMs for layout generation, implying the use of prompts to guide the model's output." 
attackeval how to evaluate the effectiveness of jailbreak attacking on large language models,"['Dong shu', 'Mingyu Jin', 'Suiyuan Zhu', 'Beichen Wang', 'Zihao Zhou', 'Chong Zhang', 'Yongfeng Zhang']",http://arxiv.org/pdf/2401.09002v2.pdf,2024-01-17,," In our research, we pioneer a novel approach to evaluate the effectiveness of jailbreak attacks on Large Language Models (LLMs), such as GPT-4 and LLaMa2, diverging from traditional robustness-focused binary evaluations. Our study introduces two distinct evaluation frameworks: a coarse-grained evaluation and a fine-grained evaluation. Each framework, using a scoring range from 0 to 1, offers a unique perspective, enabling a more comprehensive and nuanced evaluation of attack effectiveness and empowering attackers to refine their attack prompts with greater understanding. Furthermore, we have developed a comprehensive ground truth dataset specifically tailored for jailbreak tasks. This dataset not only serves as a crucial benchmark for our current study but also establishes a foundational resource for future research, enabling consistent and comparative analyses in this evolving field. Upon meticulous comparison with traditional evaluation methods, we discovered that our evaluation aligns with the baseline's trend while offering a more profound and detailed assessment. We believe that by accurately evaluating the effectiveness of attack prompts in the Jailbreak task, our work lays a solid foundation for assessing a wider array of similar or even more complex tasks in the realm of prompt injection, potentially revolutionizing this field.",,arXiv,['cs.cl'],highly relevant,"The paper introduces the Heuristic-Driven Link-of-Analogy (HD-LoA) prompting method for in-context learning, directly relating to prompt engineering techniques." dialogue for prompting a policygradientbased discrete prompt generation for fewshot learning,"['Chengzhengxu Li', 'Xiaoming Liu', 'Yichen Wang', 'Duyi Li', 'Yu Lan', 'Chao Shen']",http://arxiv.org/pdf/2308.07272v2.pdf,2023-08-14,," Prompt-based pre-trained language models (PLMs) paradigm have succeeded substantially in few-shot natural language processing (NLP) tasks. However, prior discrete prompt optimization methods require expert knowledge to design the base prompt set and identify high-quality prompts, which is costly, inefficient, and subjective. Meanwhile, existing continuous prompt optimization methods improve the performance by learning the ideal prompts through the gradient information of PLMs, whose high computational cost, and low readability and generalizability are often concerning. To address the research gap, we propose a Dialogue-comprised Policy-gradient-based Discrete Prompt Optimization ($DP_2O$) method. We first design a multi-round dialogue alignment strategy for readability prompt set generation based on GPT-4. Furthermore, we propose an efficient prompt screening metric to identify high-quality prompts with linear complexity. Finally, we construct a reinforcement learning (RL) framework based on policy gradients to match the prompts to inputs optimally. By training a policy network with only 0.67% of the PLM parameter size on the tasks in the few-shot setting, $DP_2O$ outperforms the state-of-the-art (SOTA) method by 1.52% in accuracy on average on four open-source datasets. 
Moreover, subsequent experiments also demonstrate that $DP_2O$ has good universality, robustness, and generalization ability.",,arXiv,"['cs.lg', 'cs.cl']",somewhat relevant,"The paper mentions using a prompting strategy for action anticipation with language models, indicating relevance to prompt engineering." evolutionary multiobjective optimization of large language model prompts for balancing sentiments,"['Jill Baumann', 'Oliver Kramer']",http://arxiv.org/pdf/2401.09862v1.pdf,2024-01-18,," The advent of large language models (LLMs) such as ChatGPT has attracted considerable attention in various domains due to their remarkable performance and versatility. As the use of these models continues to grow, the importance of effective prompt engineering has come to the fore. Prompt optimization emerges as a crucial challenge, as it has a direct impact on model performance and the extraction of relevant information. Recently, evolutionary algorithms (EAs) have shown promise in addressing this issue, paving the way for novel optimization strategies. In this work, we propose a evolutionary multi-objective (EMO) approach specifically tailored for prompt optimization called EMO-Prompts, using sentiment analysis as a case study. We use sentiment analysis capabilities as our experimental targets. Our results demonstrate that EMO-Prompts effectively generates prompts capable of guiding the LLM to produce texts embodying two conflicting emotions simultaneously.",,arXiv,"['cs.ne', 'cs.ai', 'cs.cl', 'cs.lg']",highly relevant,"The paper discusses designing a novel prompting method named Decision-Tree-of-Thought (DToT) for improving LLMs' toxic content detection, which is directly related to prompt engineering." prompt2nerfpil fast nerf generation via pretrained implicit latent,"['Jianmeng Liu', 'Yuyao Zhang', 'Zeyuan Meng', 'Yu-Wing Tai', 'Chi-Keung Tang']",http://arxiv.org/pdf/2312.02568v1.pdf,2023-12-05,," This paper explores promptable NeRF generation (e.g., text prompt or single image prompt) for direct conditioning and fast generation of NeRF parameters for the underlying 3D scenes, thus undoing complex intermediate steps while providing full 3D generation with conditional control. Unlike previous diffusion-CLIP-based pipelines that involve tedious per-prompt optimizations, Prompt2NeRF-PIL is capable of generating a variety of 3D objects with a single forward pass, leveraging a pre-trained implicit latent space of NeRF parameters. Furthermore, in zero-shot tasks, our experiments demonstrate that the NeRFs produced by our method serve as semantically informative initializations, significantly accelerating the inference process of existing prompt-to-NeRF methods. Specifically, we will show that our approach speeds up the text-to-NeRF model DreamFusion and the 3D reconstruction speed of the image-to-NeRF method Zero-1-to-3 by 3 to 5 times.",,arXiv,['cs.cv'],highly relevant,"The paper discusses the use of a novel prompting method with LLMs for Temporal Sentence Grounding in videos, explicitly mentioning the design of Boundary-Perceptive Prompting and the enhancement of LLM task understanding through prompts." unidcp unifying multiple medical visionlanguage tasks via dynamic crossmodal learnable prompts,"['Chenlu Zhan', 'Yufei Zhang', 'Yu Lin', 'Gaoang Wang', 'Hongwei Wang']",http://arxiv.org/pdf/2312.11171v1.pdf,2023-12-18,," Medical vision-language pre-training (Med-VLP) models have recently accelerated the fast-growing medical diagnostics application. 
However, most Med-VLP models learn task-specific representations independently from scratch, thereby leading to great inflexibility when they work across multiple fine-tuning tasks. In this work, we propose UniDCP, a Unified medical vision-language model with Dynamic Cross-modal learnable Prompts, which can be plastically applied to multiple medical vision-language tasks. Specifically, we explicitly construct a unified framework to harmonize diverse inputs from multiple pretraining tasks by leveraging cross-modal prompts for unification, which accordingly can accommodate heterogeneous medical fine-tuning tasks. Furthermore, we conceive a dynamic cross-modal prompt optimizing strategy that optimizes the prompts within the shareable space for implicitly processing the shareable clinic knowledge. UniDCP is the first Med-VLP model capable of performing all 8 medical uni-modal and cross-modal tasks over 14 corresponding datasets, consistently yielding superior results over diverse state-of-the-art methods.",,arXiv,"['cs.cv', 'cs.ai']",somewhat relevant,"The paper focuses on fine-tuning a large language model for Portuguese prompts, which implies post-training prompting techniques for language tasks." atom amortized texttomesh using 2d diffusion,"['Guocheng Qian', 'Junli Cao', 'Aliaksandr Siarohin', 'Yash Kant', 'Chaoyang Wang', 'Michael Vasilkovsky', 'Hsin-Ying Lee', 'Yuwei Fang', 'Ivan Skorokhodov', 'Peiye Zhuang', 'Igor Gilitschenski', 'Jian Ren', 'Bernard Ghanem', 'Kfir Aberman', 'Sergey Tulyakov']",http://arxiv.org/pdf/2402.00867v1.pdf,2024-02-01,," We introduce Amortized Text-to-Mesh (AToM), a feed-forward text-to-mesh framework optimized across multiple text prompts simultaneously. In contrast to existing text-to-3D methods that often entail time-consuming per-prompt optimization and commonly output representations other than polygonal meshes, AToM directly generates high-quality textured meshes in less than 1 second with around 10 times reduction in the training cost, and generalizes to unseen prompts. Our key idea is a novel triplane-based text-to-mesh architecture with a two-stage amortized optimization strategy that ensures stable training and enables scalability. Through extensive experiments on various prompt benchmarks, AToM significantly outperforms state-of-the-art amortized approaches with over 4 times higher accuracy (in DF415 dataset) and produces more distinguishable and higher-quality 3D outputs. AToM demonstrates strong generalizability, offering finegrained 3D assets for unseen interpolated prompts without further optimization during inference, unlike per-prompt solutions.",,arXiv,['cs.cv'],somewhat relevant,"The paper focuses on enhancing large language models through in-context learning, a method that involves providing few-shot examples, which is a form of prompting." learning to rewrite prompts for personalized text generation,"['Cheng Li', 'Mingyang Zhang', 'Qiaozhu Mei', 'Weize Kong', 'Michael Bendersky']",http://arxiv.org/pdf/2310.00152v2.pdf,2023-09-29,," Facilitated by large language models (LLMs), personalized text generation has become a rapidly growing research direction. Most existing studies focus on designing specialized models for a particular domain, or they require fine-tuning the LLMs to generate personalized text. We consider a typical scenario in which the large language model, which generates personalized output, is frozen and can only be accessed through APIs. 
Under this constraint, all one can do is to improve the input text (i.e., text prompts) sent to the LLM, a procedure that is usually done manually. In this paper, we propose a novel method to automatically revise prompts for personalized text generation. The proposed method takes the initial prompts generated by a state-of-the-art, multistage framework for personalized generation and rewrites a few critical components that summarize and synthesize the personal context. The prompt rewriter employs a training paradigm that chains together supervised learning (SL) and reinforcement learning (RL), where SL reduces the search space of RL and RL facilitates end-to-end training of the rewriter. Using datasets from three representative domains, we demonstrate that the rewritten prompts outperform both the original prompts and the prompts optimized via supervised learning or reinforcement learning alone. In-depth analysis of the rewritten prompts shows that they are not only human readable, but also able to guide manual revision of prompts when there is limited resource to employ reinforcement learning to train the prompt rewriter, or when it is costly to deploy an automatic prompt rewriter for inference.",,arXiv,['cs.cl'],highly relevant,"The paper introduces POMP, a method to prompt LLMs with a dynamic, sampling-based graph of multiple auxiliary languages to improve translations, indicating its focus on prompt engineering." a systematic survey of prompt engineering in large language models techniques and applications,"['Pranab Sahoo', 'Ayush Kumar Singh', 'Sriparna Saha', 'Vinija Jain', 'Samrat Mondal', 'Aman Chadha']",http://arxiv.org/pdf/2402.07927v1.pdf,2024-02-05,," Prompt engineering has emerged as an indispensable technique for extending the capabilities of large language models (LLMs) and vision-language models (VLMs). This approach leverages task-specific instructions, known as prompts, to enhance model efficacy without modifying the core model parameters. Rather than updating the model parameters, prompts allow seamless integration of pre-trained models into downstream tasks by eliciting desired model behaviors solely based on the given prompt. Prompts can be natural language instructions that provide context to guide the model or learned vector representations that activate relevant knowledge. This burgeoning field has enabled success across various applications, from question-answering to commonsense reasoning. However, there remains a lack of systematic organization and understanding of the diverse prompt engineering methods and techniques. This survey paper addresses the gap by providing a structured overview of recent advancements in prompt engineering, categorized by application area. For each prompting approach, we provide a summary detailing the prompting methodology, its applications, the models involved, and the datasets utilized. We also delve into the strengths and limitations of each approach and include a taxonomy diagram and table summarizing datasets, models, and critical points of each prompting technique. This systematic analysis enables a better understanding of this rapidly developing field and facilitates future research by illuminating open challenges and opportunities for prompt engineering.",,arXiv,"['cs.ai', 'cs.cl', 'cs.hc']",highly relevant,"The paper focuses on using a prompting approach to improve the adaptability of Large Language Models to Electronic Health Records data, making it relevant to the topic of prompt engineering." 
towards goaloriented large language model prompting a survey,"['Haochen Li', 'Jonathan Leung', 'Zhiqi Shen']",http://arxiv.org/pdf/2401.14043v1.pdf,2024-01-25,," Large Language Models (LLMs) have shown prominent performance in various downstream tasks in which prompt engineering plays a pivotal role in optimizing LLMs' performance. This paper, not as an overview of current prompt engineering methods, aims to highlight the limitation of designing prompts while holding an anthropomorphic assumption that expects LLMs to think like humans. From our review of 35 representative studies, we demonstrate that a goal-oriented prompt formulation, which guides LLMs to follow established human logical thinking, significantly improves the performance of LLMs. Furthermore, We introduce a novel taxonomy that categorizes goal-oriented prompting methods into five interconnected stages and we demonstrate the broad applicability of our framework by summarizing ten applicable tasks. With four future directions proposed, we hope to further emphasize and promote goal-oriented prompt engineering.",,arXiv,"['cs.cl', 'cs.ai']",highly relevant,"The abstract mentions designing a set of system prompts for personality generation, which directly relates to prompt engineering." program decomposition and translation with static analysis,['Ali Reza Ibrahimzada'],http://arxiv.org/pdf/2401.12412v1.pdf,2024-01-22,," The rising popularity of Large Language Models (LLMs) has motivated exploring their use in code-related tasks. Code LLMs with more than millions of parameters are trained on a massive amount of code in different Programming Languages (PLs). Such models are used for automating various Software Engineering (SE) tasks using prompt engineering. However, given the very large size of industry-scale project files, a major issue of these LLMs is their limited context window size, motivating the question of ""Can these LLMs process very large files and can we effectively perform prompt engineering?"". Code translation aims to convert source code from one PL to another. In this work, we assess the effect of method-level program decomposition on context window of LLMs and investigate how this approach can enable translation of very large files which originally could not be done due to out-of-context issue. Our observations from 20 well-known java projects and approximately 60K methods suggest that method-level program decomposition significantly improves the limited context window problem of LLMs by 99.5%. Furthermore, our empirical analysis indicate that with method-level decomposition, each input fragment on average only consumes 5% of the context window, leaving more context space for prompt engineering and the output. Finally, we investigate the effectiveness of a Call Graph (CG) approach for translating very large files when doing method-level program decomposition.",,arXiv,['cs.se'],highly relevant,"The paper mentions the use of test cases from LogicAsker to design demonstration examples for in-context learning which effectively improves LLMs, indicating the use of prompts for enhancing model capabilities." adarefiner refining decisions of language models with adaptive feedback,"['Wanpeng Zhang', 'Zongqing Lu']",http://arxiv.org/pdf/2309.17176v2.pdf,2023-09-29,," Large Language Models (LLMs) have demonstrated significant success across various domains. 
However, their application in complex decision-making tasks frequently necessitates intricate prompt engineering or fine-tuning, leading to challenges in unseen downstream tasks and heavy demands on computational resources. Meanwhile, Reinforcement Learning (RL) has been recognized as effective in decision-making problems but struggles in environments with sparse rewards, such as open-world games. To overcome these challenges, we introduce AdaRefiner, a novel framework designed to enhance the synergy between LLMs and RL feedback. The key component of AdaRefiner is a lightweight Adapter Language Model (LM), which automatically refines task comprehension based on feedback from RL agents. This method mitigates the need for intricate prompt engineering and intensive LLM fine-tuning while maintaining the LLMs' generalization abilities and enhancing their decision-making capabilities in downstream tasks. Empirical evaluations of AdaRefiner on 22 diverse tasks within the open-world game Crafter have demonstrated its superior effectiveness, especially in guiding agents towards higher-level and common-sense skills. Our work makes contributions to the automatic self-refinement of LLMs with RL feedback, offering a more adaptable and efficient solution for complex decision-making problems.",,arXiv,"['cs.ai', 'cs.cl']",somewhat relevant,"The paper mentions the integration of instruction prompts with retrieval-augmented generation (RAG) to enhance LLMs in the medical domain, indicating relevance to prompt engineering." large language models and prompt engineering for biomedical query focused multidocument summarisation,['Diego Mollá'],http://arxiv.org/pdf/2311.05169v1.pdf,2023-11-09,," This paper reports on the use of prompt engineering and GPT-3.5 for biomedical query-focused multi-document summarisation. Using GPT-3.5 and appropriate prompts, our system achieves top ROUGE-F1 results in the task of obtaining short-paragraph-sized answers to biomedical questions in the 2023 BioASQ Challenge (BioASQ 11b). This paper confirms what has been observed in other domains: 1) Prompts that incorporated few-shot samples generally improved on their counterpart zero-shot variants; 2) The largest improvement was achieved by retrieval augmented generation. The fact that these prompts allow our top runs to rank within the top two runs of BioASQ 11b demonstrate the power of using adequate prompts for Large Language Models in general, and GPT-3.5 in particular, for query-focused summarisation.",,arXiv,['cs.cl'],somewhat relevant,The abstract mentions the use of 'prompting' as a technique for incorporating glosses and synonyms which indicates the application of prompt engineering in their methodology. beautifulprompt towards automatic prompt engineering for texttoimage synthesis,"['Tingfeng Cao', 'Chengyu Wang', 'Bingyan Liu', 'Ziheng Wu', 'Jinhui Zhu', 'Jun Huang']",http://arxiv.org/pdf/2311.06752v1.pdf,2023-11-12,," Recently, diffusion-based deep generative models (e.g., Stable Diffusion) have shown impressive results in text-to-image synthesis. However, current text-to-image models often require multiple passes of prompt engineering by humans in order to produce satisfactory results for real-world applications. We propose BeautifulPrompt, a deep generative model to produce high-quality prompts from very simple raw descriptions, which enables diffusion-based models to generate more beautiful images. In our work, we first fine-tuned the BeautifulPrompt model over low-quality and high-quality collecting prompt pairs. 
Then, to ensure that our generated prompts can generate more beautiful images, we further propose a Reinforcement Learning with Visual AI Feedback technique to fine-tune our model to maximize the reward values of the generated prompts, where the reward values are calculated based on the PickScore and the Aesthetic Scores. Our results demonstrate that learning from visual AI feedback promises the potential to improve the quality of generated prompts and images significantly. We further showcase the integration of BeautifulPrompt to a cloud-native AI platform to provide better text-to-image generation service in the cloud.",,arXiv,['cs.cl'],somewhat relevant,"The paper devises a prompting template for generating user and item representations, which indicates utilization of prompt engineering in the context of recommender systems." on the discussion of large language models symmetry of agents and interplay with prompts,"['Qineng Wang', 'Zihao Wang', 'Ying Su', 'Yangqiu Song']",http://arxiv.org/pdf/2311.07076v1.pdf,2023-11-13,," Two ways has been discussed to unlock the reasoning capability of a large language model. The first one is prompt engineering and the second one is to combine the multiple inferences of large language models, or the multi-agent discussion. Theoretically, this paper justifies the multi-agent discussion mechanisms from the symmetry of agents. Empirically, this paper reports the empirical results of the interplay of prompts and discussion mechanisms, revealing the empirical state-of-the-art performance of complex multi-agent mechanisms can be approached by carefully developed prompt engineering. This paper also proposes a scalable discussion mechanism based on conquer and merge, providing a simple multi-agent discussion solution with simple prompts but state-of-the-art performance.",,arXiv,['cs.cl'],highly relevant,"The paper describes utilizing designed prompt templates for a generation-based method in speaker identification, making it directly relevant to hard prefix prompting." neuroprompts an adaptive framework to optimize prompts for texttoimage generation,"['Shachar Rosenman', 'Vasudev Lal', 'Phillip Howard']",http://arxiv.org/pdf/2311.12229v1.pdf,2023-11-20,," Despite impressive recent advances in text-to-image diffusion models, obtaining high-quality images often requires prompt engineering by humans who have developed expertise in using them. In this work, we present NeuroPrompts, an adaptive framework that automatically enhances a user's prompt to improve the quality of generations produced by text-to-image models. Our framework utilizes constrained text decoding with a pre-trained language model that has been adapted to generate prompts similar to those produced by human prompt engineers. This approach enables higher-quality text-to-image generations and provides user control over stylistic features via constraint set specification. We demonstrate the utility of our framework by creating an interactive application for prompt enhancement and image generation using Stable Diffusion. Additionally, we conduct experiments utilizing a large dataset of human-engineered prompts for text-to-image generation and show that our approach automatically produces enhanced prompts that result in superior image quality. We make our code, a screencast video demo and a live demo instance of NeuroPrompts publicly available.",,arXiv,['cs.ai'],somewhat relevant,"The abstract mentions 'prompt templating' as part of the framework's features, indicating a direct relation to prompt engineering." 
memorycompanion a smart healthcare solution to empower efficient alzheimer's care via unleashing generative ai,"['Lifei Zheng', 'Yeonie Heo', 'Yi Fang']",http://arxiv.org/pdf/2311.14730v1.pdf,2023-11-20,," With the rise of Large Language Models (LLMs), notably characterized by GPT frameworks, there emerges a catalyst for novel healthcare applications. Earlier iterations of chatbot caregivers, though existent, have yet to achieve a dimension of human-like authenticity. This paper unveils `MemoryCompanion' a pioneering digital health solution explicitly tailored for Alzheimer's disease (AD) patients and their caregivers. Drawing upon the nuances of GPT technology and prompt engineering, MemoryCompanion manifests a personalized caregiving paradigm, fostering interactions via voice-cloning and talking-face mechanisms that resonate with the familiarity of known companions. Using advanced prompt-engineering, the system intricately adapts to each patient's distinct profile, curating its content and communication style accordingly. This approach strives to counteract prevalent issues of social isolation and loneliness frequently observed in AD demographics. Our methodology, grounded in its innovative design, addresses both the caregiving and technological challenges intrinsic to this domain.",,arXiv,"['cs.cl', 'cs.ai', 'cs.hc', 'cs.lg']",highly relevant,"The paper explicitly discusses the use of prompting methods, particularly code prompts, to improve reasoning in LLMs, which aligns with the study of prompt engineering." devbots can codesign apis,['Vinicius Soares Silva Marques'],http://arxiv.org/pdf/2312.05733v1.pdf,2023-12-10,," DevBots are automated tools that perform various tasks in order to support software development. They are a growing trend and have been used in repositories to automate repetitive tasks, as code generators, and as collaborators in eliciting requirements and defining architectures. In this study, we analyzed 24 articles to investigate the state of the art of using DevBots in software development, trying to understand their characteristics, identify use cases, learn the relationship between DevBots and conversational software development, and discuss how prompt engineering can enable collaboration between human developers and bots. Additionally, we identified a gap to address by applying prompt engineering to collaborative API design between human designers and DevBots and proposed an experiment to assess what approach, between using Retrieval Augmented Generation or not, is more suitable. Our conclusion is that DevBots can collaborate with human API designers, but the two approaches have advantages and disadvantages.",,arXiv,"['cs.se', 'cs.ai', 'cs.hc']",highly relevant,"The paper uses prompt-based grounded action transformation with LLMs for Traffic Signal Control tasks, indicating a direct application of hard prefix prompting in a Reinforcement Learning scenario." ssp a simple and safe automatic prompt engineering method towards realistic image synthesis on lvm,"['Weijin Cheng', 'Jianzhi Liu', 'Jiawen Deng', 'Fuji Ren']",http://arxiv.org/pdf/2401.01128v1.pdf,2024-01-02,," Recently, text-to-image (T2I) synthesis has undergone significant advancements, particularly with the emergence of Large Language Models (LLM) and their enhancement in Large Vision Models (LVM), greatly enhancing the instruction-following capabilities of traditional T2I models. Nevertheless, previous methods focus on improving generation quality but introduce unsafe factors into prompts. 
We explore that appending specific camera descriptions to prompts can enhance safety performance. Consequently, we propose a simple and safe prompt engineering method (SSP) to improve image generation quality by providing optimal camera descriptions. Specifically, we create a dataset from multi-datasets as original prompts. To select the optimal camera, we design an optimal camera matching approach and implement a classifier for original prompts capable of automatically matching. Appending camera descriptions to original prompts generates optimized prompts for further LVM image generation. Experiments demonstrate that SSP improves semantic consistency by an average of 16% compared to others and safety metrics by 48.9%.",,arXiv,['cs.cv'],highly relevant,"The paper focuses on the effect of different prompts on LLM personality scores, directly engaging with prompt engineering concepts." llms for robotic object disambiguation,"['Connie Jiang', 'Yiqing Xu', 'David Hsu']",http://arxiv.org/pdf/2401.03388v1.pdf,2024-01-07,," The advantages of pre-trained large language models (LLMs) are apparent in a variety of language processing tasks. But can a language model's knowledge be further harnessed to effectively disambiguate objects and navigate decision-making challenges within the realm of robotics? Our study reveals the LLM's aptitude for solving complex decision making challenges that are often previously modeled by Partially Observable Markov Decision Processes (POMDPs). A pivotal focus of our research is the object disambiguation capability of LLMs. We detail the integration of an LLM into a tabletop environment disambiguation task, a decision making problem where the robot's task is to discern and retrieve a user's desired object from an arbitrarily large and complex cluster of objects. Despite multiple query attempts with zero-shot prompt engineering (details can be found in the Appendix), the LLM struggled to inquire about features not explicitly provided in the scene description. In response, we have developed a few-shot prompt engineering system to improve the LLM's ability to pose disambiguating queries. The result is a model capable of both using given features when they are available and inferring new relevant features when necessary, to successfully generate and navigate down a precise decision tree to the correct object--even when faced with identical options.",,arXiv,"['cs.ro', 'cs.cl', 'cs.lg']",highly relevant,"The paper focuses on a method for label alignment in multimodal prompt learning, mentioning the improvement of prompt tuning methods, which falls directly within the scope of prompt engineering." "a promptengineered large language model, deep learning workflow for materials classification","['Siyu Liu', 'Tongqi Wen', 'A. S. L. Subrahmanyam Pattamatta', 'David J. Srolovitz']",http://arxiv.org/pdf/2401.17788v1.pdf,2024-01-31,," With the advent of ChatGPT, large language models (LLMs) have demonstrated considerable progress across a wide array of domains. Owing to the extensive number of parameters and training data in LLMs, these models inherently encompass an expansive and comprehensive materials knowledge database, far exceeding the capabilities of individual researcher. Nonetheless, devising methods to harness the knowledge embedded within LLMs for the design and discovery of novel materials remains a formidable challenge. 
In this study, we introduce a general approach for addressing materials classification problems, which incorporates LLMs, prompt engineering, and deep learning algorithms. Utilizing a dataset of metallic glasses as a case study, our methodology achieved an improvement of up to 463% in prediction accuracy compared to conventional classification models. These findings underscore the potential of leveraging textual knowledge generated by LLMs for materials especially with sparse datasets, thereby promoting innovation in materials discovery and design.",,arXiv,['cond-mat.mtrl-sci'],somewhat relevant,"The paper describes using a prompt template to convert image quality scores into text descriptions for a model, indicating the use of prompts in the process." the effect of sampling temperature on problem solving in large language models,"['Matthew Renze', 'Erhan Guven']",http://arxiv.org/pdf/2402.05201v1.pdf,2024-02-07,," In this research study, we empirically investigate the effect of sampling temperature on the performance of Large Language Models (LLMs) on various problem-solving tasks. We created a multiple-choice question-and-answer (MCQA) exam by randomly sampling problems from standard LLM benchmarks. Then, we used four popular LLMs with five prompt-engineering techniques to solve the MCQA problems while increasing the sampling temperature from 0.0 to 1.0. Despite anecdotal reports to the contrary, our empirical results indicate that changes in temperature in the range 0.0 to 1.0 do not have a statistically significant impact on LLM performance for problem-solving tasks. In addition, these results appear to hold regardless of the LLM, the prompt-engineering technique, or the problem domain. All code, data, and supplemental materials are available on GitHub at: https://github.com/matthewrenze/jhu-llm-temperature.",,arXiv,"['cs.cl', 'cs.ai']",highly relevant,The paper's use of 'task-specific prompt template' for code compilation with ChatGPT indicates its focus on prompt engineering. using large language models to automate and expedite reinforcement learning with reward machine,"['Shayan Meshkat Alsadat', 'Jean-Raphael Gaglione', 'Daniel Neider', 'Ufuk Topcu', 'Zhe Xu']",http://arxiv.org/pdf/2402.07069v1.pdf,2024-02-11,," We present LARL-RM (Large language model-generated Automaton for Reinforcement Learning with Reward Machine) algorithm in order to encode high-level knowledge into reinforcement learning using automaton to expedite the reinforcement learning. Our method uses Large Language Models (LLM) to obtain high-level domain-specific knowledge using prompt engineering instead of providing the reinforcement learning algorithm directly with the high-level knowledge which requires an expert to encode the automaton. We use chain-of-thought and few-shot methods for prompt engineering and demonstrate that our method works using these approaches. Additionally, LARL-RM allows for fully closed-loop reinforcement learning without the need for an expert to guide and supervise the learning since LARL-RM can use the LLM directly to generate the required high-level knowledge for the task at hand. We also show the theoretical guarantee of our algorithm to converge to an optimal policy. We demonstrate that LARL-RM speeds up the convergence by 30% by implementing our method in two case studies.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cl']",highly relevant,"The abstract mentions the creation of appropriate prompt templates for LLMs to suggest medications, which indicates the paper discusses prompt engineering related to hard prefix prompting." 
generative ai has lowered the barriers to computational social sciences,['Yongjun Zhang'],http://arxiv.org/pdf/2311.10833v1.pdf,2023-11-17,," Generative artificial intelligence (AI) has revolutionized the field of computational social science, unleashing new possibilities for analyzing multimodal data, especially for scholars who may not have extensive programming expertise. This breakthrough carries profound implications for the realm of social sciences. Firstly, generative AI can significantly enhance the productivity of social scientists by automating the generation, annotation, and debugging of code. Secondly, it empowers researchers to delve into sophisticated data analysis through the innovative use of prompt engineering. Lastly, the educational sphere of computational social science stands to benefit immensely from these tools, given their exceptional ability to annotate and elucidate complex codes for learners, thereby simplifying the learning process and making the technology more accessible.",,arXiv,"['cs.hc', 'cs.cy']",highly relevant,"The paper focuses on adversarial prompts which are a form of prompt engineering, specifically targeting the manipulation of LLMs via prompt hacking." loke linked open knowledge extraction for automated knowledge graph construction,['Jamie McCusker'],http://arxiv.org/pdf/2311.09366v1.pdf,2023-11-15,," While the potential of Open Information Extraction (Open IE) for Knowledge Graph Construction (KGC) may seem promising, we find that the alignment of Open IE extraction results with existing knowledge graphs to be inadequate. The advent of Large Language Models (LLMs), especially the commercially available OpenAI models, have reset expectations for what is possible with deep learning models and have created a new field called prompt engineering. We investigate the use of GPT models and prompt engineering for knowledge graph construction with the Wikidata knowledge graph to address a similar problem to Open IE, which we call Open Knowledge Extraction (OKE) using an approach we call the Linked Open Knowledge Extractor (LOKE, pronounced like ""Loki""). We consider the entity linking task essential to construction of real world knowledge graphs. We merge the CaRB benchmark scoring approach with data from the TekGen dataset for the LOKE task. We then show that a well engineered prompt, paired with a naive entity linking approach (which we call LOKE-GPT), outperforms AllenAI's OpenIE 4 implementation on the OKE task, although it over-generates triples compared to the reference set due to overall triple scarcity in the TekGen set. Through an analysis of entity linkability in the CaRB dataset, as well as outputs from OpenIE 4 and LOKE-GPT, we see that LOKE-GPT and the ""silver"" TekGen triples show that the task is significantly different in content from OIE, if not structure. Through this analysis and a qualitative analysis of sentence extractions via all methods, we found that LOKE-GPT extractions are of high utility for the KGC task and suitable for use in semi-automated extraction settings.",,arXiv,"['cs.cl', 'cs.ai']",highly relevant,"The paper focuses on using anchored prompts for improving hypernym extraction from BERT, directly relating to the utilization of prompt engineering techniques." 
texttosticker style tailoring latent diffusion models for human expression,"['Animesh Sinha', 'Bo Sun', 'Anmol Kalia', 'Arantxa Casanova', 'Elliot Blanchard', 'David Yan', 'Winnie Zhang', 'Tony Nelli', 'Jiahui Chen', 'Hardik Shah', 'Licheng Yu', 'Mitesh Kumar Singh', 'Ankit Ramchandani', 'Maziar Sanjabi', 'Sonal Gupta', 'Amy Bearman', 'Dhruv Mahajan']",http://arxiv.org/pdf/2311.10794v1.pdf,2023-11-17,," We introduce Style Tailoring, a recipe to finetune Latent Diffusion Models (LDMs) in a distinct domain with high visual quality, prompt alignment and scene diversity. We choose sticker image generation as the target domain, as the images significantly differ from photorealistic samples typically generated by large-scale LDMs. We start with a competent text-to-image model, like Emu, and show that relying on prompt engineering with a photorealistic model to generate stickers leads to poor prompt alignment and scene diversity. To overcome these drawbacks, we first finetune Emu on millions of sticker-like images collected using weak supervision to elicit diversity. Next, we curate human-in-the-loop (HITL) Alignment and Style datasets from model generations, and finetune to improve prompt alignment and style alignment respectively. Sequential finetuning on these datasets poses a tradeoff between better style alignment and prompt alignment gains. To address this tradeoff, we propose a novel fine-tuning method called Style Tailoring, which jointly fits the content and style distribution and achieves best tradeoff. Evaluation results show our method improves visual quality by 14%, prompt alignment by 16.2% and scene diversity by 15.3%, compared to prompt engineering the base Emu model for stickers generation.",,arXiv,['cs.cv'],highly relevant,"The paper explicitly mentions exploring how prompt engineering can impact a model's reading comprehension ability, directly aligning with the topic of prompt engineering." prompt engineeringassisted malware dynamic analysis using gpt4,"['Pei Yan', 'Shunquan Tan', 'Miaohui Wang', 'Jiwu Huang']",http://arxiv.org/pdf/2312.08317v1.pdf,2023-12-13,," Dynamic analysis methods effectively identify shelled, wrapped, or obfuscated malware, thereby preventing them from invading computers. As a significant representation of dynamic malware behavior, the API (Application Programming Interface) sequence, comprised of consecutive API calls, has progressively become the dominant feature of dynamic analysis methods. Though there have been numerous deep learning models for malware detection based on API sequences, the quality of API call representations produced by those models is limited. These models cannot generate representations for unknown API calls, which weakens both the detection performance and the generalization. Further, the concept drift phenomenon of API calls is prominent. To tackle these issues, we introduce a prompt engineering-assisted malware dynamic analysis using GPT-4. In this method, GPT-4 is employed to create explanatory text for each API call within the API sequence. Afterward, the pre-trained language model BERT is used to obtain the representation of the text, from which we derive the representation of the API sequence. Theoretically, this proposed method is capable of generating representations for all API calls, excluding the necessity for dataset training during the generation process. Utilizing the representation, a CNN-based detection model is designed to extract the feature. We adopt five benchmark datasets to validate the performance of the proposed model. 
The experimental results reveal that the proposed detection algorithm performs better than the state-of-the-art method (TextCNN). Specifically, in cross-database experiments and few-shot learning experiments, the proposed model achieves excellent detection performance and almost a 100% recall rate for malware, verifying its superior generalization performance. The code is available at: github.com/yan-scnu/Prompted_Dynamic_Detection.",,arXiv,"['cs.cr', 'cs.ai']",highly relevant,"The abstract mentions the ability to customize with prompt engineering, directly linking it to the topic." prompting hard or hardly prompting prompt inversion for texttoimage diffusion models,"['Shweta Mahajan', 'Tanzila Rahman', 'Kwang Moo Yi', 'Leonid Sigal']",http://arxiv.org/pdf/2312.12416v1.pdf,2023-12-19,," The quality of the prompts provided to text-to-image diffusion models determines how faithful the generated content is to the user's intent, often requiring `prompt engineering'. To harness visual concepts from target images without prompt engineering, current approaches largely rely on embedding inversion by optimizing and then mapping them to pseudo-tokens. However, working with such high-dimensional vector representations is challenging because they lack semantics and interpretability, and only allow simple vector operations when using them. Instead, this work focuses on inverting the diffusion model to obtain interpretable language prompts directly. The challenge of doing this lies in the fact that the resulting optimization problem is fundamentally discrete and the space of prompts is exponentially large; this makes using standard optimization techniques, such as stochastic gradient descent, difficult. To this end, we utilize a delayed projection scheme to optimize for prompts representative of the vocabulary space in the model. Further, we leverage the findings that different timesteps of the diffusion process cater to different levels of detail in an image. The later, noisy, timesteps of the forward diffusion process correspond to the semantic information, and therefore, prompt inversion in this range provides tokens representative of the image semantics. We show that our approach can identify semantically interpretable and meaningful prompts for a target image which can be used to synthesize diverse images with similar content. We further illustrate the application of the optimized prompts in evolutionary image generation and concept removal.",,arXiv,"['cs.cv', 'cs.lg']",highly relevant,"The abstract mentions using simple prompt engineering to take the user emotion into consideration for improving ChatGPT's empathetic responses, indicating direct relevance to the topic of prompt engineering." typefly flying drones with large language model,"['Guojun Chen', 'Xiaojing Yu', 'Lin Zhong']",http://arxiv.org/pdf/2312.14950v1.pdf,2023-12-08,," Commanding a drone with a natural language is not only user-friendly but also opens the door for emerging language agents to control the drone. Emerging large language models (LLMs) provide a previously impossible opportunity to automatically translate a task description in a natural language to a program that can be executed by the drone. However, powerful LLMs and their vision counterparts are limited in three important ways. First, they are only available as cloud-based services. Sending images to the cloud raises privacy concerns. Second, they are expensive, costing proportionally to the request size. 
Finally, without expensive fine-tuning, existing LLMs are quite limited in their capability of writing a program for specialized systems like drones. In this paper, we present a system called TypeFly that tackles the above three problems using a combination of edge-based vision intelligence, novel programming language design, and prompt engineering. Instead of the familiar Python, TypeFly gets a cloud-based LLM service to write a program in a small, custom language called MiniSpec, based on task and scene descriptions in English. Such MiniSpec programs are not only succinct (and therefore efficient) but also able to consult the LLM during their execution using a special skill called query. Using a set of increasingly challenging drone tasks, we show that design choices made by TypeFly can reduce both the cost of LLM service and the task execution time by more than 2x. More importantly, query and prompt engineering techniques contributed by TypeFly significantly improve the chance of success of complex tasks.",,arXiv,"['cs.ro', 'cs.ai', 'cs.hc']",highly relevant,"The paper directly addresses prompt engineering by evaluating different prompting strategies for improving Large Language Model performance, specifically focusing on ChatGPT." prompting large language models for recommender systems a comprehensive framework and empirical analysis,"['Lanling Xu', 'Junjie Zhang', 'Bingqian Li', 'Jinpeng Wang', 'Mingchen Cai', 'Wayne Xin Zhao', 'Ji-Rong Wen']",http://arxiv.org/pdf/2401.04997v1.pdf,2024-01-10,," Recently, large language models such as ChatGPT have showcased remarkable abilities in solving general tasks, demonstrating the potential for applications in recommender systems. To assess how effectively LLMs can be used in recommendation tasks, our study primarily focuses on employing LLMs as recommender systems through prompting engineering. We propose a general framework for utilizing LLMs in recommendation tasks, focusing on the capabilities of LLMs as recommenders. To conduct our analysis, we formalize the input of LLMs for recommendation into natural language prompts with two key aspects, and explain how our framework can be generalized to various recommendation scenarios. As for the use of LLMs as recommenders, we analyze the impact of public availability, tuning strategies, model architecture, parameter scale, and context length on recommendation results based on the classification of LLMs. As for prompt engineering, we further analyze the impact of four important components of prompts, i.e., task descriptions, user interest modeling, candidate items construction and prompting strategies. In each section, we first define and categorize concepts in line with the existing literature. Then, we propose inspiring research questions followed by experiments to systematically analyze the impact of different factors on two public datasets. Finally, we summarize promising directions to shed lights on future research.",,arXiv,['cs.ir'],highly relevant,"The paper mentions utilizing generative pretrained transformers (GPTs) with prompt engineering for zero-shot and few-shot learning scenarios, directly connecting to the topic of prompt engineering." pokergpt an endtoend lightweight solver for multiplayer texas hold'em via large language model,"['Chenghao Huang', 'Yanbo Cao', 'Yinlong Wen', 'Tao Zhou', 'Yanru Zhang']",http://arxiv.org/pdf/2401.06781v1.pdf,2024-01-04,," Poker, also known as Texas Hold'em, has always been a typical research target within imperfect information games (IIGs). 
IIGs have long served as a measure of artificial intelligence (AI) development. Representative prior works, such as DeepStack and Libratus heavily rely on counterfactual regret minimization (CFR) to tackle heads-up no-limit Poker. However, it is challenging for subsequent researchers to learn CFR from previous models and apply it to other real-world applications due to the expensive computational cost of CFR iterations. Additionally, CFR is difficult to apply to multi-player games due to the exponential growth of the game tree size. In this work, we introduce PokerGPT, an end-to-end solver for playing Texas Hold'em with arbitrary number of players and gaining high win rates, established on a lightweight large language model (LLM). PokerGPT only requires simple textual information of Poker games for generating decision-making advice, thus guaranteeing the convenient interaction between AI and humans. We mainly transform a set of textual records acquired from real games into prompts, and use them to fine-tune a lightweight pre-trained LLM using reinforcement learning human feedback technique. To improve fine-tuning performance, we conduct prompt engineering on raw data, including filtering useful information, selecting behaviors of players with high win rates, and further processing them into textual instruction using multiple prompt engineering techniques. Through the experiments, we demonstrate that PokerGPT outperforms previous approaches in terms of win rate, model size, training time, and response speed, indicating the great potential of LLMs in solving IIGs.",,arXiv,"['cs.ai', 'cs.cl']",highly relevant,"The paper directly addresses the use of prompt engineering for text anomaly detection and evaluates the performance of different prompting models, making it highly relevant to the topic." prewrite prompt rewriting with reinforcement learning,"['Weize Kong', 'Spurthi Amba Hombaiah', 'Mingyang Zhang', 'Qiaozhu Mei', 'Michael Bendersky']",http://arxiv.org/pdf/2401.08189v1.pdf,2024-01-16,," Prompt engineering is critical for the development of LLM-based applications. However, it is usually done manually in a ""trial and error"" fashion. This manual procedure can be time consuming, ineffective, and the generated prompts are, in a lot of cases, sub-optimal. Even for the prompts which seemingly work well, there is always a lingering question: can the prompts be made better with further modifications? To address these questions, in this paper, we investigate prompt engineering automation. We consider a specific use case scenario in which developers/users have drafted initial prompts, but lack the time/expertise to optimize them. We propose PRewrite, an automated tool to rewrite these drafts and to generate highly effective new prompts. PRewrite is based on the Reinforcement Learning (RL) framework which allows for end-to-end optimization and our design allows the RL search to happen in a large action space. The automated tool leverages manually crafted prompts as starting points which makes the rewriting procedure more guided and efficient. The generated prompts are human readable, and self-explanatory, unlike some of those in previous works. 
We conducted extensive experiments on diverse datasets and found that the prompts generated with this new method not only outperform professionally crafted prompts, but also prompts generated with other previously proposed methods.",,arXiv,"['cs.ai', 'cs.cl', 'cs.lg']",highly relevant,"The paper discusses 'instruction-augmented MT using GPT4-LLM' and 'HMT-augmented translation', highlighting the use of prompt engineering to enhance machine translation by including AI or human-generated instructions, which aligns with the interest in hard prefix prompting techniques." fewshot learning for chronic disease management leveraging large language models and multiprompt engineering with medical knowledge injection,"['Haoxin Liu', 'Wenli Zhang', 'Jiaheng Xie', 'Buomsoo Kim', 'Zhu Zhang', 'Yidong Chai']",http://arxiv.org/pdf/2401.12988v1.pdf,2024-01-16,," This study harnesses state-of-the-art AI technology for chronic disease management, specifically in detecting various mental disorders through user-generated textual content. Existing studies typically rely on fully supervised machine learning, which presents challenges such as the labor-intensive manual process of annotating extensive training data for each disease and the need to design specialized deep learning architectures for each problem. To address such challenges, we propose a novel framework that leverages advanced AI techniques, including large language models and multi-prompt engineering. Specifically, we address two key technical challenges in data-driven chronic disease management: (1) developing personalized prompts to represent each user's uniqueness and (2) incorporating medical knowledge into prompts to provide context for chronic disease detection, instruct learning objectives, and operationalize prediction goals. We evaluate our method using four mental disorders, which are prevalent chronic diseases worldwide, as research cases. On the depression detection task, our method (F1 = 0.975~0.978) significantly outperforms traditional supervised learning paradigms, including feature engineering (F1 = 0.760) and architecture engineering (F1 = 0.756). Meanwhile, our approach demonstrates success in few-shot learning, i.e., requiring only a minimal number of training examples to detect chronic diseases based on user-generated textual content (i.e., only 2, 10, or 100 subjects). Moreover, our method can be generalized to other mental disorder detection tasks, including anorexia, pathological gambling, and self-harm (F1 = 0.919~0.978).",,arXiv,"['cs.cl', 'cs.ai', 'k.5', 'i.2.7; h.4.m']",highly relevant,"The paper directly mentions the use of prompt engineering to study the performance of large language models, making it relevant to the topic." multilingual texttoimage generation magnifies gender stereotypes and prompt engineering may not help you,"['Felix Friedrich', 'Katharina Hämmerl', 'Patrick Schramowski', 'Jindrich Libovicky', 'Kristian Kersting', 'Alexander Fraser']",http://arxiv.org/pdf/2401.16092v2.pdf,2024-01-29,," Text-to-image generation models have recently achieved astonishing results in image quality, flexibility, and text alignment and are consequently employed in a fast-growing number of applications. Through improvements in multilingual abilities, a larger community now has access to this kind of technology. Yet, as we will show, multilingual models suffer similarly from (gender) biases as monolingual models. 
Furthermore, the natural expectation is that these models will provide similar results across languages, but this is not the case and there are important differences between languages. Thus, we propose a novel benchmark MAGBIG intending to foster research in multilingual models without gender bias. We investigate whether multilingual T2I models magnify gender bias with MAGBIG. To this end, we use multilingual prompts requesting portrait images of persons of a certain occupation or trait (using adjectives). Our results show not only that models deviate from the normative assumption that each gender should be equally likely to be generated, but that there are also big differences across languages. Furthermore, we investigate prompt engineering strategies, i.e. the use of indirect, neutral formulations, as a possible remedy for these biases. Unfortunately, they help only to a limited extent and result in worse text-to-image alignment. Consequently, this work calls for more research into diverse representations across languages in image generators.",,arXiv,"['cs.cl', 'cs.cy', 'cs.lg']",highly relevant,"The paper focuses on prompt-based experiments with GPT language models for document-level machine translation, which directly pertains to prompt engineering." access prompt engineering for automated web accessibility violation corrections,"['Calista Huang', 'Alyssa Ma', 'Suchir Vyasamudri', 'Eugenie Puype', 'Sayem Kamal', 'Juan Belza Garcia', 'Salar Cheema', 'Michael Lutz']",http://arxiv.org/pdf/2401.16450v2.pdf,2024-01-28,," With the increasing need for inclusive and user-friendly technology, web accessibility is crucial to ensuring equal access to online content for individuals with disabilities, including visual, auditory, cognitive, or motor impairments. Despite the existence of accessibility guidelines and standards such as Web Content Accessibility Guidelines (WCAG) and the Web Accessibility Initiative (W3C), over 90% of websites still fail to meet the necessary accessibility requirements. For web users with disabilities, there exists a need for a tool to automatically fix web page accessibility errors. While research has demonstrated methods to find and target accessibility errors, no research has focused on effectively correcting such violations. This paper presents a novel approach to correcting accessibility violations on the web by modifying the document object model (DOM) in real time with foundation models. Leveraging accessibility error information, large language models (LLMs), and prompt engineering techniques, we achieved greater than a 51% reduction in accessibility violation errors after corrections on our novel benchmark: ACCESS. Our work demonstrates a valuable approach toward the direction of inclusive web content, and provides directions for future research to explore advanced methods to automate web accessibility.",,arXiv,"['cs.hc', 'cs.ai', 'cs.se']",highly relevant,"The paper explicitly investigates the effects of integrating specific information into prompts on translation quality, which aligns with the study of hard prefix prompting in prompt engineering." exploring generative ai assisted feedback writing for students' written responses to a physics conceptual question with prompt engineering and fewshot learning,"['Tong Wan', 'Zhongzhou Chen']",http://arxiv.org/pdf/2311.06180v1.pdf,2023-11-10,," Instructor's feedback plays a critical role in students' development of conceptual understanding and reasoning skills.
However, grading student written responses and providing personalized feedback can take a substantial amount of time. In this study, we explore using GPT-3.5 to write feedback to student written responses to conceptual questions with prompt engineering and few-shot learning techniques. In stage one, we used a small portion (n=20) of the student responses on one conceptual question to iteratively train GPT. Four of the responses paired with human-written feedback were included in the prompt as examples for GPT. We tasked GPT to generate feedback to the other 16 responses, and we refined the prompt after several iterations. In stage two, we gave four student researchers the 16 responses as well as two versions of feedback, one written by the authors and the other by GPT. Students were asked to rate the correctness and usefulness of each feedback, and to indicate which one was generated by GPT. The results showed that students tended to rate the feedback by human and GPT equally on correctness, but they all rated the feedback by GPT as more useful. Additionally, the successful rates of identifying GPT's feedback were low, ranging from 0.1 to 0.6. In stage three, we tasked GPT to generate feedback to the rest of the student responses (n=65). The feedback was rated by four instructors based on the extent of modification needed if they were to give the feedback to students. All the instructors rated approximately 70% of the feedback statements as needing only minor or no modification. This study demonstrated the feasibility of using Generative AI as an assistant to generating feedback for student written responses with only a relatively small number of examples. An AI assistant can be one of the solutions to substantially reduce time spent on grading student written responses.",,arXiv,['physics.ed-ph'],highly relevant,"The paper explicitly mentions the use of prompting applied to large, encoder-decoder pre-trained language models, which indicates its relevance to the topic of prompt engineering." "topologies of reasoning demystifying chains, trees, and graphs of thoughts","['Maciej Besta', 'Florim Memedi', 'Zhenyu Zhang', 'Robert Gerstenberger', 'Nils Blach', 'Piotr Nyczyk', 'Marcin Copik', 'Grzegorz Kwaśniewski', 'Jürgen Müller', 'Lukas Gianinazzi', 'Ales Kubicek', 'Hubert Niewiadomski', 'Onur Mutlu', 'Torsten Hoefler']",http://arxiv.org/pdf/2401.14295v1.pdf,2024-01-25,," The field of natural language processing (NLP) has witnessed significant progress in recent years, with a notable focus on improving large language models' (LLM) performance through innovative prompting techniques. Among these, prompt engineering coupled with structures has emerged as a promising paradigm, with designs such as Chain-of-Thought, Tree of Thoughts, or Graph of Thoughts, in which the overall LLM reasoning is guided by a structure such as a graph. As illustrated with numerous examples, this paradigm significantly enhances the LLM's capability to solve numerous tasks, ranging from logical or mathematical reasoning to planning or creative writing. To facilitate the understanding of this growing field and pave the way for future developments, we devise a general blueprint for effective and efficient LLM reasoning schemes. For this, we conduct an in-depth analysis of the prompt execution pipeline, clarifying and clearly defining different concepts. We then build the first taxonomy of structure-enhanced LLM reasoning schemes.
We focus on identifying fundamental classes of harnessed structures, and we analyze the representations of these structures, algorithms executed with these structures, and many others. We refer to these structures as reasoning topologies, because their representation becomes to a degree spatial, as they are contained within the LLM context. Our study compares existing prompting schemes using the proposed taxonomy, discussing how certain design choices lead to different patterns in performance and cost. We also outline theoretical underpinnings, relationships between prompting and other parts of the LLM ecosystem such as knowledge bases, and the associated research challenges. Our work will help to advance future prompt engineering techniques.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",somewhat relevant,"The paper mentions using prompting in conjunction with adapter-based pretraining for vision-language tasks, indicating relevance to prompt engineering." a preliminary study on using large language models in software pentesting,"['Kumar Shashwat', 'Francis Hahn', 'Xinming Ou', 'Dmitry Goldgof', 'Lawrence Hall', 'Jay Ligatti', 'S. Raj Rajgopalan', 'Armin Ziaie Tabari']",http://arxiv.org/pdf/2401.17459v1.pdf,2024-01-30,," Large language models (LLM) are perceived to offer promising potential for automating security tasks, such as those found in security operation centers (SOCs). As a first step towards evaluating this perceived potential, we investigate the use of LLMs in software pentesting, where the main task is to automatically identify software security vulnerabilities in source code. We hypothesize that an LLM-based AI agent can be improved over time for a specific security task as human operators interact with it. Such improvement can be made, as a first step, by engineering prompts fed to the LLM based on the responses produced, to include relevant contexts and structures so that the model provides more accurate results. Such engineering efforts become sustainable if the prompts that are engineered to produce better results on current tasks also produce better results on future unknown tasks. To examine this hypothesis, we utilize the OWASP Benchmark Project 1.2, which contains 2,740 hand-crafted source code test cases containing various types of vulnerabilities. We divide the test cases into training and testing data, where we engineer the prompts based on the training data (only), and evaluate the final system on the testing data. We compare the AI agent's performance on the testing data against the performance of the agent without the prompt engineering. We also compare the AI agent's results against those from SonarQube, a widely used static code analyzer for security testing. We built and tested multiple versions of the AI agent using different off-the-shelf LLMs -- Google's Gemini-pro, as well as OpenAI's GPT-3.5-Turbo and GPT-4-Turbo (with both chat completion and assistant APIs). The results show that using LLMs is a viable approach to build an AI agent for software pentesting that can improve through repeated use and prompt engineering.",,arXiv,"['cs.cr', 'cs.ai']",highly relevant,"The paper explicitly mentions leveraging prompt engineering and zero-shot/few-shot learning methodologies for emotion detection, making it highly relevant to the topic of prompt engineering." how are prompts different in terms of sensitivity,"['Sheng Lu', 'Hendrik Schuff', 'Iryna Gurevych']",http://arxiv.org/pdf/2311.07230v1.pdf,2023-11-13,," In-context learning (ICL) has become one of the most popular learning paradigms.
While there is a growing body of literature focusing on prompt engineering, there is a lack of systematic analysis comparing the effects of prompts across different models and tasks. To address this gap, we present a comprehensive prompt analysis based on the sensitivity of a function. Our analysis reveals that sensitivity is an unsupervised proxy for model performance, as it exhibits a strong negative correlation with accuracy. We use gradient-based saliency scores to empirically demonstrate how different prompts affect the relevance of input tokens to the output, resulting in different levels of sensitivity. Furthermore, we introduce sensitivity-aware decoding, which incorporates sensitivity estimation as a penalty term in the standard greedy decoding. We show that this approach is particularly helpful when information in the input is scarce. Our work provides a fresh perspective on the analysis of prompts, and contributes to a better understanding of the mechanism of ICL.",,arXiv,['cs.cl'],highly relevant,"The paper investigates the impact of prompt elements on model behavior using Chain-of-thought prompting, which directly relates to the study of prompt engineering." think before you speak cultivating communication skills of large language models via inner monologue,"['Junkai Zhou', 'Liang Pang', 'Huawei Shen', 'Xueqi Cheng']",http://arxiv.org/pdf/2311.07445v1.pdf,2023-11-13,," The emergence of large language models (LLMs) further improves the capabilities of open-domain dialogue systems, which can generate fluent, coherent, and diverse responses. However, LLMs still lack an important ability: communication skills, which makes them more like information-seeking tools than anthropomorphic chatbots. To make LLMs more anthropomorphic and proactive during the conversation, we add five communication skills to the response generation process: topic transition, proactively asking questions, concept guidance, empathy, and summarising often. The addition of communication skills increases the interest of users in the conversation and attracts them to chat for longer. To enable LLMs to better understand and use communication skills, we design and add an inner monologue to LLMs. The complete process is achieved through prompt engineering and in-context learning. To evaluate communication skills, we construct a benchmark named Cskills for evaluating various communication skills, which can also more comprehensively evaluate the dialogue generation ability of the model. Experimental results show that the proposed CSIM strategy improves the backbone models and outperforms the baselines in both automatic and human evaluations.",,arXiv,"['cs.cl', 'cs.ai']",highly relevant,"The paper mentions using zero-shot and few-shot prompting with large language models for hate speech detection, which indicates it involves prompt engineering techniques." assessing testtime variability for interactive 3d medical image segmentation with diverse point prompts,"['Hao Li', 'Han Liu', 'Dewei Hu', 'Jiacheng Wang', 'Ipek Oguz']",http://arxiv.org/pdf/2311.07806v1.pdf,2023-11-13,," Interactive segmentation models leverage prompts from users to produce robust segmentation. This advancement is facilitated by prompt engineering, where interactive prompts serve as strong priors during test-time. However, this is an inherently subjective and hard-to-reproduce process.
The variability in user expertise and inherently ambiguous boundaries in medical images can lead to inconsistent prompt selections, potentially affecting segmentation accuracy. This issue has not yet been extensively explored for medical imaging. In this paper, we assess the test-time variability for interactive medical image segmentation with diverse point prompts. For a given target region, the point is classified into three sub-regions: boundary, margin, and center. Our goal is to identify a straightforward and efficient approach for optimal prompt selection during test-time based on three considerations: (1) benefits of additional prompts, (2) effects of prompt placement, and (3) strategies for optimal prompt selection. We conduct extensive experiments on the public Medical Segmentation Decathlon dataset for the challenging colon tumor segmentation task. We suggest an optimal strategy for prompt selection during test-time, supported by comprehensive results. The code is publicly available at https://github.com/MedICL-VU/variability",,arXiv,['cs.cv'],highly relevant,"The paper discusses the use of few-shot prompting strategies for empathy style transfer, indicating a focus on prompt engineering." i was blind but now i see implementing visionenabled dialogue in social robots,"['Giulio Antonio Abbo', 'Tony Belpaeme']",http://arxiv.org/pdf/2311.08957v1.pdf,2023-11-15,," In the rapidly evolving landscape of human-computer interaction, the integration of vision capabilities into conversational agents stands as a crucial advancement. This paper presents an initial implementation of a dialogue manager that leverages the latest progress in Large Language Models (e.g., GPT-4, IDEFICS) to enhance the traditional text-based prompts with real-time visual input. LLMs are used to interpret both textual prompts and visual stimuli, creating a more contextually aware conversational agent. The system's prompt engineering, incorporating dialogue with summarisation of the images, ensures a balance between context preservation and computational efficiency. Six interactions with a Furhat robot powered by this system are reported, illustrating and discussing the results obtained. By implementing this vision-enabled dialogue system, the paper envisions a future where conversational agents seamlessly blend textual and visual modalities, enabling richer, more context-aware dialogues.",,arXiv,"['cs.ro', 'cs.ai', 'cs.hc']",somewhat relevant,"The paper is somewhat relevant because it mentions 'few-shot prompting methods' as a way to guide models to follow instructions, which directly relates to prompt engineering." simulating opinion dynamics with networks of llmbased agents,"['Yun-Shiuan Chuang', 'Agam Goyal', 'Nikunj Harlalka', 'Siddharth Suresh', 'Robert Hawkins', 'Sijia Yang', 'Dhavan Shah', 'Junjie Hu', 'Timothy T. Rogers']",http://arxiv.org/pdf/2311.09618v2.pdf,2023-11-16,," Accurately simulating human opinion dynamics is crucial for understanding a variety of societal phenomena, including polarization and the spread of misinformation. However, the agent-based models (ABMs) commonly used for such simulations often over-simplify human behavior. We propose a new approach to simulating opinion dynamics based on populations of Large Language Models (LLMs). Our findings reveal a strong inherent bias in LLM agents towards producing accurate information, leading simulated agents to consensus in line with scientific reality. This bias limits their utility for understanding resistance to consensus views on issues like climate change.
After inducing confirmation bias through prompt engineering, however, we observed opinion fragmentation in line with existing agent-based modeling and opinion dynamics research. These insights highlight the promise and limitations of LLM agents in this domain and suggest a path forward: refining LLMs with real-world discourse to better simulate the evolution of human beliefs.",,arXiv,"['physics.soc-ph', 'cs.cl']",highly relevant,"The study focuses on few-shot prompting for machine translation, which directly involves using prompting techniques to improve translation outcomes." fairytalecqa integrating a commonsense knowledge graph into children's storybook narratives,"['Jiaju Chen', 'Yuxuan Lu', 'Shao Zhang', 'Bingsheng Yao', 'Yuanzhe Dong', 'Ying Xu', 'Yunyao Li', 'Qianwen Wang', 'Dakuo Wang', 'Yuling Sun']",http://arxiv.org/pdf/2311.09756v1.pdf,2023-11-16,," AI models (including LLMs) often rely on narrative question-answering (QA) datasets to provide customized QA functionalities to support downstream children education applications; however, existing datasets only include QA pairs that are grounded within the given storybook content, but children can learn more when teachers relate the storybook content to real-world knowledge (e.g., commonsense knowledge). We introduce the FairytaleCQA dataset, which is annotated by children education experts, to supplement 278 storybook narratives with educationally appropriate commonsense knowledge. The dataset has 5,868 QA pairs that not only originate from the storybook narrative but also contain the commonsense knowledge grounded by an external knowledge graph (i.e., ConceptNet). A follow-up experiment shows that a smaller model (T5-large) fine-tuned with FairytaleCQA reliably outperforms much larger prompt-engineered LLMs (e.g., GPT-4) in this new QA-pair generation task (QAG). This result suggests that: 1) our dataset brings novel challenges to existing LLMs, and 2) human experts' data annotation is still critical as they have much nuanced knowledge that LLMs do not know in the children educational domain.",,arXiv,['cs.cl'],highly relevant,"The paper focuses on using LLMs for translation of text with markup and experiments with zero, one, and few-shot prompting, directly relating to prompt engineering practices." "localizing lying in llama understanding instructed dishonesty on truefalse questions through prompting, probing, and patching","['James Campbell', 'Richard Ren', 'Phillip Guo']",http://arxiv.org/pdf/2311.15131v1.pdf,2023-11-25,," Large language models (LLMs) demonstrate significant knowledge through their outputs, though it is often unclear whether false outputs are due to a lack of knowledge or dishonesty. In this paper, we investigate instructed dishonesty, wherein we explicitly prompt LLaMA-2-70b-chat to lie. We perform prompt engineering to find which prompts best induce lying behavior, and then use mechanistic interpretability approaches to localize where in the network this behavior occurs. Using linear probing and activation patching, we localize five layers that appear especially important for lying. We then find just 46 attention heads within these layers that enable us to causally intervene such that the lying model instead answers honestly. We show that these interventions work robustly across many prompts and dataset splits.
Overall, our work contributes a greater understanding of dishonesty in LLMs so that we may hope to prevent it.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cl']",highly relevant,"The paper discusses formulating prompt templates to transfer inductive biases to improve GPT-4's reasoning, directly engaging with prompt engineering." the transformative influence of large language models on software development,['Sajed Jalil'],http://arxiv.org/pdf/2311.16429v1.pdf,2023-11-28,," The increasing adoption and commercialization of generalized Large Language Models (LLMs) have profoundly impacted various aspects of our daily lives. Initially embraced by the computer science community, the versatility of LLMs has found its way into diverse domains. In particular, the software engineering realm has witnessed the most transformative changes. The increasing use of LLMs as AI Pair Programming Assistants has spurred the development of specialized models aimed at aiding software engineers. Although this new paradigm offers numerous advantages, it also presents critical challenges and open problems. To identify the potential and prevailing obstacles, we systematically reviewed contemporary scholarly publications, emphasizing the perspectives of software developers and usability concerns. Preliminary findings underscore pressing concerns about data privacy, bias, and misinformation. Additionally, we identified several usability challenges, including prompt engineering, increased cognitive demands, and mistrust. Finally, we introduce 12 open problems that we have identified through our survey, covering these various domains.",,arXiv,"['cs.se', 'cs.hc', '68t07', 'd.2.3; i.2.5; i.2.7']",somewhat relevant,"The abstract mentions that LLM-based parsers using prompting techniques are vulnerable to SQL injection attacks, indicating that the paper discusses the use of prompts in text-to-SQL systems." "large language models for networking applications, enabling techniques, and challenges","['Yudong Huang', 'Hongyang Du', 'Xinyuan Zhang', 'Dusit Niyato', 'Jiawen Kang', 'Zehui Xiong', 'Shuo Wang', 'Tao Huang']",http://arxiv.org/pdf/2311.17474v1.pdf,2023-11-29,," The rapid evolution of network technologies and the growing complexity of network tasks necessitate a paradigm shift in how networks are designed, configured, and managed. With a wealth of knowledge and expertise, large language models (LLMs) are one of the most promising candidates. This paper aims to pave the way for constructing domain-adapted LLMs for networking. Firstly, we present potential LLM applications for vertical network fields and showcase the mapping from natural language to network language. Then, several enabling technologies are investigated, including parameter-efficient finetuning and prompt engineering. The insight is that language understanding and tool usage are both required for network LLMs. Driven by the idea of embodied intelligence, we propose ChatNet, a domain-adapted network LLM framework with access to various external network tools. ChatNet can reduce the time required for burdensome network planning tasks significantly, leading to a substantial improvement in efficiency. Finally, key challenges and future research directions are highlighted.",,arXiv,['cs.ni'],highly relevant,"The paper focuses on designing high-performing prompt templates for zero-shot sentiment analysis, which is directly related to prompt engineering."
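The ChatNet record above describes exposing external network tools to a domain-adapted LLM through prompt engineering. As a concrete illustration, the following is a minimal, hypothetical sketch of such a hard prompt; the tool names, the template wording, and the CALL convention are assumptions made for illustration, not details taken from the paper.

```python
# Hypothetical sketch of a hard prompt that exposes external network tools to an
# LLM, in the spirit of the ChatNet framework described above. Tool names and
# wording are illustrative assumptions.
NETWORK_TOOLS = {
    "ping": "Check reachability and latency of a host.",
    "traceroute": "List the hops between this node and a host.",
    "show_topology": "Return the current network topology as JSON.",
}

def build_network_prompt(task: str) -> str:
    # The template is plain text prepended to the task, i.e. a hard prefix prompt.
    tool_lines = "\n".join(f"- {name}: {desc}" for name, desc in NETWORK_TOOLS.items())
    return (
        "You are a network planning assistant with access to these tools:\n"
        f"{tool_lines}\n\n"
        "When a tool is needed, reply with a single line `CALL <tool> <args>` and wait "
        "for the result before continuing.\n\n"
        f"Task: {task}\n"
    )

if __name__ == "__main__":
    print(build_network_prompt("Plan a low-latency path between site A and site B."))
```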
large language models for travel behavior prediction,"['Baichuan Mo', 'Hanyong Xu', 'Dingyi Zhuang', 'Ruoyun Ma', 'Xiaotong Guo', 'Jinhua Zhao']",http://arxiv.org/pdf/2312.00819v1.pdf,2023-11-30,," Travel behavior prediction is a fundamental task in transportation demand management. The conventional methods for travel behavior prediction rely on numerical data to construct mathematical models and calibrate model parameters to represent human preferences. Recent advancements in large language models (LLMs) have shown great reasoning abilities to solve complex problems. In this study, we propose to use LLMs to predict travel behavior with prompt engineering without data-based parameter learning. Specifically, we carefully design our prompts that include 1) task description, 2) travel characteristics, 3) individual attributes, and 4) guides of thinking with domain knowledge, and ask the LLMs to predict an individual's travel behavior and explain the results. We select the travel mode choice task as a case study. Results show that, though no training samples are provided, LLM-based predictions have competitive accuracy and F1-score as canonical supervised learning methods such as multinomial logit, random forest, and neural networks. LLMs can also output reasons that support their prediction. However, though in most of the cases, the output explanations are reasonable, we still observe cases that violate logic or contain hallucinations.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cl']",highly relevant,"The paper introduces 'DP-Prompt', a mechanism that leverages zero-shot prompting with large language models for privacy preservation, directly involving the concept of prompting, which is central to prompt engineering." promptbench a unified library for evaluation of large language models,"['Kaijie Zhu', 'Qinlin Zhao', 'Hao Chen', 'Jindong Wang', 'Xing Xie']",http://arxiv.org/pdf/2312.07910v2.pdf,2023-12-13,," The evaluation of large language models (LLMs) is crucial to assess their performance and mitigate potential security risks. In this paper, we introduce PromptBench, a unified library to evaluate LLMs. It consists of several key components that are easily used and extended by researchers: prompt construction, prompt engineering, dataset and model loading, adversarial prompt attack, dynamic evaluation protocols, and analysis tools. PromptBench is designed to be an open, general, and flexible codebase for research purposes that can facilitate original study in creating new benchmarks, deploying downstream applications, and designing new evaluation protocols. The code is available at: https://github.com/microsoft/promptbench and will be continuously supported.",,arXiv,"['cs.ai', 'cs.cl', 'cs.lg']",highly relevant,"The focus on 'zero-shot prompting' indicates the paper is directly related to prompt engineering, specifically in the context of zero-shot generalization." learning interpretable queries for explainable image classification with information pursuit,"['Stefan Kolek', 'Aditya Chattopadhyay', 'Kwan Ho Ryan Chan', 'Hector Andrade-Loarca', 'Gitta Kutyniok', 'Réne Vidal']",http://arxiv.org/pdf/2312.11548v1.pdf,2023-12-16,," Information Pursuit (IP) is an explainable prediction algorithm that greedily selects a sequence of interpretable queries about the data in order of information gain, updating its posterior at each step based on observed query-answer pairs. The standard paradigm uses hand-crafted dictionaries of potential data queries curated by a domain expert or a large language model after a human prompt.
However, in practice, hand-crafted dictionaries are limited by the expertise of the curator and the heuristics of prompt engineering. This paper introduces a novel approach: learning a dictionary of interpretable queries directly from the dataset. Our query dictionary learning problem is formulated as an optimization problem by augmenting IP's variational formulation with learnable dictionary parameters. To formulate learnable and interpretable queries, we leverage the latent space of large vision and language models like CLIP. To solve the optimization problem, we propose a new query dictionary learning algorithm inspired by classical sparse dictionary learning. Our experiments demonstrate that learned dictionaries significantly outperform hand-crafted dictionaries generated with large language models.",,arXiv,['cs.cv'],highly relevant,"The paper explores various prompting strategies with GPT-4, including zero-shot and example-based prompting, relevant to prompt engineering." dspy assertions computational constraints for selfrefining language model pipelines,"['Arnav Singhvi', 'Manish Shetty', 'Shangyin Tan', 'Christopher Potts', 'Koushik Sen', 'Matei Zaharia', 'Omar Khattab']",http://arxiv.org/pdf/2312.13382v2.pdf,2023-12-20,," Chaining language model (LM) calls as composable modules is fueling a new way of programming, but ensuring LMs adhere to important constraints requires heuristic ""prompt engineering"". We introduce LM Assertions, a programming construct for expressing computational constraints that LMs should satisfy. We integrate our constructs into the recent DSPy programming model for LMs, and present new strategies that allow DSPy to compile programs with LM Assertions into more reliable and accurate systems. We also propose strategies to use assertions at inference time for automatic self-refinement with LMs. We report on four diverse case studies for text generation and find that LM Assertions improve not only compliance with imposed rules but also downstream task performance, passing constraints up to 164% more often and generating up to 37% more high-quality responses. Our reference implementation of LM Assertions is integrated into DSPy at https://github.com/stanfordnlp/dspy",,arXiv,"['cs.cl', 'cs.ai', 'cs.pl']",highly relevant,"The paper discusses the use of an in-context prompting mechanism for zero-shot persona customization in dialogue models, which falls into the realm of prompt engineering." chatgpt for conversational recommendation refining recommendations by reprompting with feedback,"['Kyle Dylan Spurlock', 'Cagla Acun', 'Esin Saka', 'Olfa Nasraoui']",http://arxiv.org/pdf/2401.03605v1.pdf,2024-01-07,," Recommendation algorithms have been pivotal in handling the overwhelming volume of online content. However, these algorithms seldom consider direct user input, resulting in superficial interaction between them. Efforts have been made to include the user directly in the recommendation process through conversation, but these systems too have had limited interactivity. Recently, Large Language Models (LLMs) like ChatGPT have gained popularity due to their ease of use and their ability to adapt dynamically to various tasks while responding to feedback. In this paper, we investigate the effectiveness of ChatGPT as a top-n conversational recommendation system. We build a rigorous pipeline around ChatGPT to simulate how a user might realistically probe the model for recommendations: by first instructing and then reprompting with feedback to refine a set of recommendations.
We further explore the effect of popularity bias in ChatGPT's recommendations, and compare its performance to baseline models. We find that reprompting ChatGPT with feedback is an effective strategy to improve recommendation relevancy, and that popularity bias can be mitigated through prompt engineering.",,arXiv,"['cs.ir', 'cs.ai', 'cs.cl', 'cs.lg', 'i.2.7; h.3.3']",somewhat relevant,"The paper mentions the use of 'in-context learning (ICL)' and 'designing an effective task demonstration,' which suggests the use of prompting techniques, albeit in a multimodal setting." from prompt engineering to prompt science with human in the loop,['Chirag Shah'],http://arxiv.org/pdf/2401.04122v2.pdf,2024-01-01,," As LLMs make their way into many aspects of our lives, one place that warrants increased scrutiny with LLM usage is scientific research. Using LLMs for generating or analyzing data for research purposes is gaining popularity. But when such application is marred with ad-hoc decisions and engineering solutions, we need to be concerned about how it may affect that research, its findings, or any future works based on that research. We need a more scientific approach to using LLMs in our research. While there are several active efforts to support more systematic construction of prompts, they are often focused more on achieving desirable outcomes rather than producing replicable and generalizable knowledge with sufficient transparency, objectivity, or rigor. This article presents a new methodology inspired by codebook construction through qualitative methods to address that. Using humans in the loop and a multi-phase verification process, this methodology lays a foundation for a more systematic, objective, and trustworthy way of applying LLMs for analyzing data. Specifically, we show how a set of researchers can work through a rigorous process of labeling, deliberating, and documenting to remove subjectivity and bring transparency and replicability to the prompt generation process.",,arXiv,"['cs.hc', 'cs.ai']",highly relevant,"The paper mentions the use of prompt-based learning, focusing on employing instruction finetuning for Few-Shot NER tasks, which directly pertains to hard prefix prompting in prompt engineering." the benefits of a concise chain of thought on problemsolving in large language models,"['Matthew Renze', 'Erhan Guven']",http://arxiv.org/pdf/2401.05618v1.pdf,2024-01-11,," In this paper, we introduce Concise Chain-of-Thought (CCoT) prompting. We compared standard CoT and CCoT prompts to see how conciseness impacts response length and correct-answer accuracy. We evaluated this using GPT-3.5 and GPT-4 with a multiple-choice question-and-answer (MCQA) benchmark. CCoT reduced average response length by 48.70% for both GPT-3.5 and GPT-4 while having a negligible impact on problem-solving performance. However, on math problems, GPT-3.5 with CCoT incurs a performance penalty of 27.69%. Overall, CCoT leads to an average per-token cost reduction of 22.67%. These results have practical implications for AI systems engineers using LLMs to solve real-world problems with CoT prompt-engineering techniques. In addition, these results provide more general insight for AI researchers studying the emergent behavior of step-by-step reasoning in LLMs.",,arXiv,"['cs.cl', 'cs.ai']",highly relevant,"The paper discusses evaluating GPT-4's performance on information extraction tasks using in-context learning with prompts, which aligns with the study of prompt engineering."
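The Concise Chain-of-Thought (CCoT) record that closes this block compares standard CoT prompts with concise variants on multiple-choice questions. The sketch below illustrates the contrast with two instruction templates and a small prompt builder; the wording and the example question are assumptions for illustration, not the prompts or items used in the paper.

```python
# Illustrative contrast between a standard CoT instruction and a concise (CCoT)
# instruction for a multiple-choice question. Wording is an assumption, not the
# paper's exact prompts.
STANDARD_COT = (
    "Answer the following multiple-choice question. "
    "Think step by step and explain your reasoning in full before giving the final answer."
)
CONCISE_COT = (
    "Answer the following multiple-choice question. "
    "Think step by step, but keep each reasoning step to one short sentence, "
    "then give the final answer."
)

def build_mcqa_prompt(instruction: str, question: str, choices: list[str]) -> str:
    # Label the options A, B, C, ... and append them after the shared instruction.
    options = "\n".join(f"{chr(65 + i)}. {c}" for i, c in enumerate(choices))
    return f"{instruction}\n\nQuestion: {question}\n{options}\n\nAnswer:"

question = "A train travels 60 km in 45 minutes. What is its average speed?"
choices = ["45 km/h", "60 km/h", "80 km/h", "90 km/h"]
for name, instruction in [("CoT", STANDARD_COT), ("CCoT", CONCISE_COT)]:
    print(f"--- {name} prompt ---\n{build_mcqa_prompt(instruction, question, choices)}\n")
```

The only difference between the two templates is the prefix instruction, which is what makes a response-length and per-token-cost comparison meaningful while the question format is held fixed.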
seek for incantations towards accurate texttoimage diffusion synthesis through prompt engineering,"['Chang Yu', 'Junran Peng', 'Xiangyu Zhu', 'Zhaoxiang Zhang', 'Qi Tian', 'Zhen Lei']",http://arxiv.org/pdf/2401.06345v1.pdf,2024-01-12,," Text-to-image synthesis by diffusion models has recently shown remarkable performance in generating high-quality images. Although it performs well for simple texts, the models may get confused when faced with complex texts that contain multiple objects or spatial relationships. To get the desired images, a feasible way is to manually adjust the textual descriptions, i.e., narrating the texts or adding some words, which is labor-consuming. In this paper, we propose a framework to learn the proper textual descriptions for diffusion models through prompt learning. By utilizing the quality guidance and the semantic guidance derived from the pre-trained diffusion model, our method can effectively learn the prompts to improve the matches between the input text and the generated images. Extensive experiments and analyses have validated the effectiveness of the proposed method.",,arXiv,['cs.cv'],highly relevant,"The paper employs instruction templates and in-context learning, which are indicative of the use of prompting techniques to improve model performance." icbellm high quality international events data with open source large language models on consumer hardware,"['Rex W. Douglass', 'Thomas Leo Scherer', 'J. Andrés Gannon', 'Erik Gartzke']",http://arxiv.org/pdf/2401.10558v1.pdf,2024-01-19,," The International Crises Behavior Events (ICBe) ontology provides high coverage over the thoughts, communications, and actions that constitute international relations. A major disadvantage of that level of detail is that it requires large human capital costs to apply it manually to new texts. Whether such an ontology is practical for international relations research given limited human and financial resources is a pressing concern. We introduce a working proof of concept showing that ICBe codings can be reliably extracted from new texts using the current generation of open source large language models (LLMs) running on consumer grade computer hardware. Our solution requires no finetuning and only limited prompt engineering. We detail our solution and present benchmarks against the original ICBe codings. We conclude by discussing the implications of very high quality event coding of any text being within reach of individual researchers with limited resources.",,arXiv,['stat.ap'],highly relevant,"The paper focuses on in-context learning, a type of prompting technique, and examines how named entity replacements impact model accuracy, which is directly related to prompt engineering." incontext learning for extreme multilabel classification,"[""Karel D'Oosterlinck"", 'Omar Khattab', 'François Remy', 'Thomas Demeester', 'Chris Develder', 'Christopher Potts']",http://arxiv.org/pdf/2401.12178v1.pdf,2024-01-22,," Multi-label classification problems with thousands of classes are hard to solve with in-context learning alone, as language models (LMs) might lack prior knowledge about the precise classes or how to assign them, and it is generally infeasible to demonstrate every class in a prompt. We propose a general program, $\texttt{Infer--Retrieve--Rank}$, that defines multi-step interactions between LMs and retrievers to efficiently tackle such problems.
We implement this program using the $\texttt{DSPy}$ programming model, which specifies in-context systems in a declarative manner, and use $\texttt{DSPy}$ optimizers to tune it towards specific datasets by bootstrapping only tens of few-shot examples. Our primary extreme classification program, optimized separately for each task, attains state-of-the-art results across three benchmarks (HOUSE, TECH, TECHWOLF). We apply the same program to a benchmark with vastly different characteristics and attain competitive performance as well (BioDEX). Unlike prior work, our proposed solution requires no finetuning, is easily applicable to new tasks, alleviates prompt engineering, and requires only tens of labeled examples. Our code is public at https://github.com/KarelDO/xmc.dspy.",,arXiv,"['cs.cl', 'cs.ai']",highly relevant,"The paper addresses the refinement and application of in-context learning (ICL) templates, which is directly related to prompt engineering." a generalpurpose ai avatar in healthcare,"['Nicholas Yan', 'Gil Alterovitz']",http://arxiv.org/pdf/2401.12981v1.pdf,2024-01-10,," Recent advancements in machine learning and natural language processing have led to the rapid development of artificial intelligence (AI) as a valuable tool in the healthcare industry. Using large language models (LLMs) as conversational agents or chatbots has the potential to assist doctors in diagnosing patients, detecting early symptoms of diseases, and providing health advice to patients. This paper focuses on the role of chatbots in healthcare and explores the use of avatars to make AI interactions more appealing to patients. A framework of a general-purpose AI avatar application is demonstrated by using a three-category prompt dictionary and prompt improvement mechanism. A two-phase approach is suggested to fine-tune a general-purpose AI language model and create different AI avatars to discuss medical issues with users. Prompt engineering enhances the chatbot's conversational abilities and personality traits, fostering a more human-like interaction with patients. Ultimately, the injection of personality into the chatbot could potentially increase patient engagement. Future directions for research include investigating ways to improve chatbots' understanding of context and ensuring the accuracy of their outputs through fine-tuning with specialized medical datasets.",,arXiv,['cs.cl'],highly relevant,"The paper discusses exploring various prompt design strategies for employing LLMs in Text-to-SQL tasks, directly addressing prompt engineering." enhance reasoning for large language models in the game werewolf,"['Shuang Wu', 'Liwen Zhu', 'Tao Yang', 'Shiwei Xu', 'Qiang Fu', 'Yang Wei', 'Haobo Fu']",http://arxiv.org/pdf/2402.02330v1.pdf,2024-02-04,," This paper presents an innovative framework that integrates Large Language Models (LLMs) with an external Thinker module to enhance the reasoning capabilities of LLM-based agents. Unlike augmenting LLMs with prompt engineering, Thinker directly harnesses knowledge from databases and employs various optimization techniques. The framework forms a reasoning hierarchy where LLMs handle intuitive System-1 tasks such as natural language processing, while the Thinker focuses on cognitive System-2 tasks that require complex logical analysis and domain-specific knowledge. Our framework is presented using a 9-player Werewolf game that demands dual-system reasoning.
We introduce a communication protocol between LLMs and the Thinker, and train the Thinker using data from 18800 human sessions and reinforcement learning. Experiments demonstrate the framework's effectiveness in deductive reasoning, speech generation, and online game evaluation. Additionally, we fine-tune a 6B LLM to surpass GPT4 when integrated with the Thinker. This paper also contributes the largest dataset for social deduction games to date.",,arXiv,"['cs.ai', 'cs.cl']",somewhat relevant,"The abstract mentions the use of in-context learning (ICL) by large language models for tasks without parameter update, which implies the use of prompts, although it does not specify if these are hard prefix prompts." prompting implicit discourse relation annotation,"['Frances Yung', 'Mansoor Ahmad', 'Merel Scholman', 'Vera Demberg']",http://arxiv.org/pdf/2402.04918v1.pdf,2024-02-07,," Pre-trained large language models, such as ChatGPT, achieve outstanding performance in various reasoning tasks without supervised training and were found to have outperformed crowdsourcing workers. Nonetheless, ChatGPT's performance in the task of implicit discourse relation classification, prompted by a standard multiple-choice question, is still far from satisfactory and considerably inferior to state-of-the-art supervised approaches. This work investigates several proven prompting techniques to improve ChatGPT's recognition of discourse relations. In particular, we experimented with breaking down the classification task, which involves numerous abstract labels, into smaller subtasks. Nonetheless, experiment results show that the inference accuracy hardly changes even with sophisticated prompt engineering, suggesting that implicit discourse relation classification is not yet resolvable under zero-shot or few-shot settings.",,arXiv,"['cs.cl', 'cs.ai']",highly relevant,"The paper focuses on improving Large Language Models' performance through in-context learning and a progressive revision framework, which is a type of prompt engineering." illuminate a novel approach for depression detection with explainable analysis and proactive therapy using prompt engineering,['Aryan Agrawal'],http://arxiv.org/pdf/2402.05127v1.pdf,2024-02-05,," This paper introduces a novel paradigm for depression detection and treatment using advanced Large Language Models (LLMs): Generative Pre-trained Transformer 4 (GPT-4), Llama 2 chat, and Gemini. These LLMs are fine-tuned with specialized prompts to diagnose, explain, and suggest therapeutic interventions for depression. A unique few-shot prompting method enhances the models' ability to analyze and explain depressive symptoms based on the DSM-5 criteria. In the interaction phase, the models engage in empathetic dialogue management, drawing from resources like PsychDB and a Cognitive Behavioral Therapy (CBT) Guide, fostering supportive interactions with individuals experiencing major depressive disorders. Additionally, the research introduces the Illuminate Database, enriched with various CBT modules, aiding in personalized therapy recommendations. The study evaluates LLM performance using metrics such as F1 scores, Precision, Recall, Cosine similarity, and Recall-Oriented Understudy for Gisting Evaluation (ROUGE) across different test sets, demonstrating their effectiveness.
This comprehensive approach blends cutting-edge AI with established psychological methods, offering new possibilities in mental health care and showcasing the potential of LLMs in revolutionizing depression diagnosis and treatment strategies.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",highly relevant,"The paper focuses on enhancing LLMs' reasoning ability through CoT-style prompting specifically for text-to-SQL parsing, aligning with the topic of prompt engineering." best practices for text annotation with large language models,['Petter Törnberg'],http://arxiv.org/pdf/2402.05129v1.pdf,2024-02-05,," Large Language Models (LLMs) have ushered in a new era of text annotation, as their ease-of-use, high accuracy, and relatively low costs have meant that their use has exploded in recent months. However, the rapid growth of the field has meant that LLM-based annotation has become something of an academic Wild West: the lack of established practices and standards has led to concerns about the quality and validity of research. Researchers have warned that the ostensible simplicity of LLMs can be misleading, as they are prone to bias, misunderstandings, and unreliable results. Recognizing the transformative potential of LLMs, this paper proposes a comprehensive set of standards and best practices for their reliable, reproducible, and ethical use. These guidelines span critical areas such as model selection, prompt engineering, structured prompting, prompt stability analysis, rigorous model validation, and the consideration of ethical and legal implications. The paper emphasizes the need for a structured, directed, and formalized approach to using LLMs, aiming to ensure the integrity and robustness of text annotation practices, and advocates for a nuanced and critical engagement with LLMs in social scientific research.",,arXiv,['cs.cl'],highly relevant,"The paper is focused on the efficiency of in-context demonstrations for prompting, which is a critical aspect of prompt engineering." entgpt linking generative large language models with knowledge bases,"['Yifan Ding', 'Amrit Poudel', 'Qingkai Zeng', 'Tim Weninger', 'Balaji Veeramani', 'Sanmitra Bhattacharya']",http://arxiv.org/pdf/2402.06738v1.pdf,2024-02-09,," The ability of Large Language Models (LLMs) to generate factually correct output remains relatively unexplored due to the lack of fact-checking and knowledge grounding during training and inference. In this work, we aim to address this challenge through the Entity Disambiguation (ED) task. We first consider prompt engineering, and design a three-step hard-prompting method to probe LLMs' ED performance without supervised fine-tuning (SFT). Overall, the prompting method improves the micro-F_1 score of the original vanilla models by a large margin, in some cases up to 36% and higher, and obtains comparable performance across 10 datasets when compared to existing methods with SFT. We further improve the knowledge grounding ability through instruction tuning (IT) with similar prompts and responses. The instruction-tuned model not only achieves higher micro-F_1 score performance as compared to several baseline methods on supervised entity disambiguation tasks, with an average micro-F_1 improvement of 2.1% over the existing baseline models, but also obtains higher accuracy on six Question Answering (QA) tasks in the zero-shot setting.
Our methodologies apply to both open- and closed-source LLMs.",,arXiv,['cs.cl'],highly relevant,"The paper discusses automatically generating prompts to enhance in-context learning for dialogue evaluation, aligning with hard prefix prompt engineering." ghostwriter augmenting collaborative humanai writing experiences through personalization and agency,"['Catherine Yeh', 'Gonzalo Ramos', 'Rachel Ng', 'Andy Huntington', 'Richard Banks']",http://arxiv.org/pdf/2402.08855v1.pdf,2024-02-13,," Large language models (LLMs) are becoming more prevalent and have found a ubiquitous use in providing different forms of writing assistance. However, LLM-powered writing systems can frustrate users due to their limited personalization and control, which can be exacerbated when users lack experience with prompt engineering. We see design as one way to address these challenges and introduce GhostWriter, an AI-enhanced writing design probe where users can exercise enhanced agency and personalization. GhostWriter leverages LLMs to learn the user's intended writing style implicitly as they write, while allowing explicit teaching moments through manual style edits and annotations. We study 18 participants who use GhostWriter on two different writing tasks, observing that it helps users craft personalized text generations and empowers them by providing multiple ways to control the system's writing style. From this study, we present insights regarding people's relationship with AI-assisted writing and offer design recommendations for future work.",,arXiv,"['cs.hc', 'cs.ai']",highly relevant,"The paper discusses exploring GPT-3's capability in generating empathetic dialogues through prompt-based in-context learning, which is a direct application of hard prefix prompting in natural language processing tasks." inadequacies of large language model benchmarks in the era of generative artificial intelligence,"['Timothy R. McIntosh', 'Teo Susnjak', 'Tong Liu', 'Paul Watters', 'Malka N. Halgamuge']",http://arxiv.org/pdf/2402.09880v1.pdf,2024-02-15,," The rapid rise in popularity of Large Language Models (LLMs) with emerging capabilities has spurred public curiosity to evaluate and compare different LLMs, leading many researchers to propose their LLM benchmarks. Noticing preliminary inadequacies in those benchmarks, we embarked on a study to critically assess 23 state-of-the-art LLM benchmarks, using our novel unified evaluation framework through the lenses of people, process, and technology, under the pillars of functionality and security. Our research uncovered significant limitations, including biases, difficulties in measuring genuine reasoning, adaptability, implementation inconsistencies, prompt engineering complexity, evaluator diversity, and the overlooking of cultural and ideological norms in one comprehensive assessment. Our discussions emphasized the urgent need for standardized methodologies, regulatory certainties, and ethical guidelines in light of Artificial Intelligence (AI) advancements, including advocating for an evolution from static benchmarks to dynamic behavioral profiling to accurately capture LLMs' complex behaviors and potential risks.
Our study highlighted the necessity for a paradigm shift in LLM evaluation methodologies, underlining the importance of collaborative efforts for the development of universally accepted benchmarks and the enhancement of AI systems' integration into society.",,arXiv,"['cs.ai', 'cs.cy', 'cs.hc']",highly relevant,"The paper discusses the strategy of selecting semantically-similar in-context examples to formulate prompts for GPT-3, which is directly related to prompt engineering." chainofthought reasoning without prompting,"['Xuezhi Wang', 'Denny Zhou']",http://arxiv.org/pdf/2402.10200v1.pdf,2024-02-15,," In enhancing the reasoning capabilities of large language models (LLMs), prior research primarily focuses on specific prompting techniques such as few-shot or zero-shot chain-of-thought (CoT) prompting. These methods, while effective, often involve manually intensive prompt engineering. Our study takes a novel approach by asking: Can LLMs reason effectively without prompting? Our findings reveal that, intriguingly, CoT reasoning paths can be elicited from pre-trained LLMs by simply altering the \textit{decoding} process. Rather than conventional greedy decoding, we investigate the top-$k$ alternative tokens, uncovering that CoT paths are frequently inherent in these sequences. This approach not only bypasses the confounders of prompting but also allows us to assess the LLMs' \textit{intrinsic} reasoning abilities. Moreover, we observe that the presence of a CoT in the decoding path correlates with a higher confidence in the model's decoded answer. This confidence metric effectively differentiates between CoT and non-CoT paths. Extensive empirical studies on various reasoning benchmarks show that the proposed CoT-decoding substantially outperforms the standard greedy decoding.",,arXiv,['cs.cl'],highly relevant,The paper's focus on 'Corpus-Specific Prefix Tuning' and proposing a method for improving prefix word information directly pertains to hard prefix prompting in prompt engineering.
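The final record above (CoT-decoding) elicits chain-of-thought paths by branching on alternative first tokens rather than by changing the prompt. The following is a minimal sketch of that idea using Hugging Face transformers with a small placeholder model; the confidence proxy (mean top-1 probability over generated tokens) is a simplification assumed for illustration, not the paper's exact answer-confidence metric.

```python
# Minimal sketch of CoT-decoding as described in the record above: branch on the
# top-k candidates for the first generated token instead of pure greedy decoding,
# decode each branch greedily, and keep the highest-confidence branch. Model
# choice and the confidence proxy are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder model for illustration
tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def cot_decoding(prompt: str, k: int = 5, max_new_tokens: int = 48):
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        next_logits = model(**inputs).logits[0, -1]   # logits for the first new token
    top_tokens = torch.topk(next_logits, k).indices   # k alternative first tokens
    branches = []
    for t in top_tokens:
        ids = torch.cat([inputs["input_ids"], t.view(1, 1)], dim=-1)
        out = model.generate(
            ids,
            do_sample=False,                           # greedy continuation of each branch
            max_new_tokens=max_new_tokens,
            output_scores=True,
            return_dict_in_generate=True,
            pad_token_id=tok.eos_token_id,
        )
        # Crude confidence proxy: mean top-1 probability over the generated tokens.
        step_probs = [torch.softmax(s[0], dim=-1).max().item() for s in out.scores]
        confidence = sum(step_probs) / len(step_probs) if step_probs else 0.0
        continuation = tok.decode(out.sequences[0][inputs["input_ids"].shape[1]:])
        branches.append((confidence, continuation))
    return max(branches, key=lambda b: b[0])           # (confidence, continuation)

if __name__ == "__main__":
    print(cot_decoding("Q: I have 3 apples and eat one. How many apples are left?\nA:"))
```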